Computers are a little like clockwork devices. A clock strikes a beat and a certain small
amount of work gets done. Just as a beginner piano player plays to the beat of a metronome, computers run to the beat of an electronic clock. If you set the metronome too fast, the player
won’t have enough time to find the next piano key and the rendition will fall apart, or
at least sound pretty bad. Similarly, if you set the clock rate of a CPU too high, it will malfunction and the system will crash. This won't necessarily damage the chip; it just won't work. Part of a computer's design includes determining the optimum clock rate. In some ways, the clock is like the coxswain
on a rowing team. He or she is the person who holds up the megaphone
and chants "Stroke, stroke, stroke!" If the order is issued too quickly, the rowers will get out of sync and the boat will slow down. In that sense, the coxswain can dish out commands no faster than the slowest rower can follow them. In a PC, many chips work to the beat of the computer's clock, and that means the clock can't run faster than the slowest component, which tends to be the most complex one. This is usually the CPU or memory, although they can run on independent clocks. The negotiation between CPU and memory was originally handled by a separate memory controller chip on the motherboard, better known as the Northbridge or memory controller hub, and most recently that job has been integrated into the CPU itself. The clock rate is usually determined by the frequency of an oscillator crystal, similar to the crystal in your watch. This produces a sine wave, which is translated by electronic circuitry into a square wave, and from there we get binary signals: the wave is either on or off.
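
As a rough sketch of that shaping step (illustrative Python with made-up sample counts; nothing here comes from the video itself), you can picture the circuitry as thresholding the sine wave, so anything above zero reads as on and anything below reads as off:

```python
import math

def square_wave_from_sine(samples_per_cycle=16, cycles=2):
    """Illustrative only: threshold a sampled sine wave into a 1/0 square wave,
    roughly what the clock-shaping circuitry does to the crystal's output."""
    square = []
    for i in range(samples_per_cycle * cycles):
        sine_value = math.sin(2 * math.pi * i / samples_per_cycle)
        square.append(1 if sine_value >= 0 else 0)  # the wave is either on or off
    return square

print(square_wave_from_sine())  # runs of 1s followed by runs of 0s, one pair per cycle
```
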
After each clock pulse is sent scooting through the circuitry, in varying directions depending on the commands given, the signal lines inside the CPU need time to settle into their new state, meaning a transition either from 0 to 1 or from 1 to 0. If the next clock pulse comes too quickly, the results will be out of sync and incorrect. It's this process of transitioning that produces heat, and therefore the higher the clock rate, or the number of transitions made within a second, the more heat is produced.
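
Putting rough numbers on those two ideas, purely as an illustration (every figure below is an assumption, not a measurement): the settling time of the slowest path caps the usable clock rate, and the standard dynamic-power relationship for switching logic, P = C × V² × f, is why heat rises with clock rate:

```python
# Assumed figures, purely to illustrate the relationships described above.
settle_time_s = 0.3e-9          # time the slowest signal path needs to settle (assumption)
max_clock_hz = 1 / settle_time_s
print(f"Clock ceiling before results become unreliable: {max_clock_hz / 1e9:.2f} GHz")

# Dynamic (switching) power rises with clock rate: P = C * V^2 * f
switched_capacitance_f = 1e-9   # assumed effective switched capacitance, in farads
core_voltage_v = 1.2            # assumed core voltage
for clock_hz in (1e9, 2e9, 3e9):
    power_w = switched_capacitance_f * core_voltage_v ** 2 * clock_hz
    print(f"{clock_hz / 1e9:.0f} GHz -> about {power_w:.1f} W of switching heat")
```
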
Originally, CPU clocks were measured in hertz and kHz, with 1kHz being a thousand ticks per second. Then, in the 80s and 90s, CPU clocks generally ticked at more than a million times per second, which equates to a clock speed of 1MHz or more. Early PCs and XTs used a 4.77MHz clock and
the AT originally harnessed a 6MHz clock. These processors didn’t yield a lot of heat
and generally didn't require heat sinks to dissipate the excess energy. Of course, in more recent times we're more familiar with GHz, with a 1GHz clock ticking 1,000 million (a billion) times per second. That's some pretty mean progress, and also a significant increase in heat energy. For many years the maximum possible CPU speed
determined a lot about the rest of the computer. Usually a manufacturer would design a motherboard
to operate at the same speed as the CPU. When CPUs rose in speed from 5MHz to 8MHz,
motherboards followed suit. All of the chips on the board had to be 8MHz
chips, including the memory. This was not only expensive, but at the time it was pretty much impossible to push them over 8MHz, so since 1984 motherboards have been designed so that different parts can run at different speeds. This did mean that some speed was wasted, but compromises must sometimes be made. Like processor speed, though, motherboard speeds moved with the times, and by 1989 they had been pushed to 33MHz. With Intel's 50MHz 486 processor on the scene
this meant that a new strategy was required for the CPU to work with these still slower
boards. The original clock doubler was a special 486 that could plug straight into a 25MHz board and operate at that speed externally, but run internally at 50MHz. This meant that numeric calculations and internal data transfers were carried out at double speed, while external operations were queued up at the 25MHz rate. A 66MHz 486 was later developed to plug straight into 33MHz boards. This trend continued as processors got faster and faster than motherboards, with clock triplers and quadruplers, and today boards tend to run at several hundred MHz, with that speed multiplied up to match the CPU speed. On boards that allow it, overclocking the motherboard's front side bus or its multiplication factor will therefore increase the speed of your CPU.
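
The arithmetic behind all these doublers and multipliers is just multiplication, as a quick sketch shows (the historical figures are from the paragraph above; the modern ones are hypothetical):

```python
def cpu_clock_mhz(bus_mhz, multiplier):
    """Effective CPU clock = external (front side) bus clock x internal multiplier."""
    return bus_mhz * multiplier

print(cpu_clock_mhz(25, 2))    # clock-doubled 486 on a 25MHz board -> 50 MHz
print(cpu_clock_mhz(33, 2))    # 66MHz 486 on a 33MHz board -> 66 MHz
print(cpu_clock_mhz(100, 36))  # hypothetical modern part: 100MHz base clock x36 -> 3600 MHz

# Overclocking: raise either factor and the CPU clock rises with it.
print(cpu_clock_mhz(110, 36))  # same multiplier, faster bus -> 3960 MHz
print(cpu_clock_mhz(100, 40))  # same bus, higher multiplier -> 4000 MHz
```
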
In modern multi-core processors, each core can run at its own clock rate. So what do these cycles of pulses actually mean? How does a cycle of electrons whizzing about translate to what you see on your screen? Well, let's take a 1kHz clock rate. A CPU running at this speed can flick each of its binary switches one thousand times every second. After receiving input that flicks these switches into a certain configuration, the CPU produces a result, which can be cached and used on its next cycle. This continues, and once enough of these results have been determined, we have an instruction. It's this instruction that might tell the computer to place a pixel into memory. Cycle after cycle, instruction after instruction, pixel by pixel, a picture is pieced together in memory, before another instruction starts the process of pushing that picture onto our screens.
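
Purely as a toy model of that flow, and nothing like a real instruction set, you could sketch the idea of cycles accumulating into instructions, and instructions filling a framebuffer, like this (all figures invented):

```python
# Toy model: pretend every instruction takes 4 cycles and writes exactly one pixel.
CYCLES_PER_INSTRUCTION = 4      # invented figure
WIDTH, HEIGHT = 4, 3            # invented picture size

framebuffer = [[0] * WIDTH for _ in range(HEIGHT)]  # the picture being assembled in memory
cycles = 0
pixels_written = 0

while pixels_written < WIDTH * HEIGHT:
    cycles += 1
    if cycles % CYCLES_PER_INSTRUCTION == 0:        # enough cycles -> one finished instruction
        framebuffer[pixels_written // WIDTH][pixels_written % WIDTH] = 1
        pixels_written += 1

print(f"{cycles} cycles to build a {WIDTH}x{HEIGHT} picture")  # 48 cycles for 12 pixels
```
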
Now the number of cycles required per instruction depends on what program you're trying to execute and how the CPU is built, and these days it's more likely that we'd be talking about instructions per cycle rather than cycles per instruction, simply due to the vast amount of data a CPU can now handle and compute within each single cycle. A Core i7 can generally compute over 100,000 million instructions per second from a clock speed of just 3,000 million cycles per second, or 3GHz. This instruction rate is really what determines how fast a given program will execute. The first electronic general-purpose computer, the ENIAC, had a 100kHz clock rate and each instruction took 20 cycles, leading to an instruction rate of 5kHz, or five thousand instructions per second.
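
Both of those figures fall out of the same relationship: instruction rate = clock rate × instructions per cycle, or equivalently clock rate ÷ cycles per instruction. A quick sketch, where the Core i7's per-cycle figure is an assumed round number chosen only to land near the quoted total:

```python
def instructions_per_second(clock_hz, instructions_per_cycle):
    return clock_hz * instructions_per_cycle

# ENIAC: 100kHz clock, 20 cycles per instruction -> 1/20 of an instruction per cycle.
print(instructions_per_second(100_000, 1 / 20))          # 5000 instructions per second

# Core i7 (illustrative): 3GHz clock, assume ~34 instructions per cycle across all
# cores, which lands near the "over 100,000 MIPS" figure quoted above.
print(instructions_per_second(3_000_000_000, 34) / 1e6)  # ~102000 MIPS
```
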
Of course, due to the varying ways processors are built and handle data, and the fact that different tasks will massively affect how many instructions are completed in a given time frame, it's far easier for us to compare processor speed in cycles per second rather than instructions per second, or millions of instructions per second, known as MIPS, which is now generally used only for task-speed benchmarking.

Clockwork Computers [Byte Size] | Nostalgia Nerd