Hey everyone, time for another video. In 2012,
Intel released the Ivy Bridge line of processors for the LGA 1155 socket, which was essentially
a die shrink (from 32nm to 22nm) of the Sandy Bridge architecture. Some Ivy Bridge based
processors launched in April and May of 2012, but it wasn’t until September that
year that the Ivy Bridge based Pentiums arrived. One of these was the Pentium
G2120: a 55W TDP dual-core processor with no Turbo Boost or Hyper-Threading, which
ran at 3.1GHz and supported DDR3 memory up to a maximum selectable frequency of 1600MHz
(up from the 1333MHz maximum of Sandy Bridge Pentiums). The G2120 featured around 634 million
transistors (although I couldn’t actually find a definitive number), built on the previously
mentioned 22nm fabrication process. It also had 64KB of L1 cache per core, 256KB
of L2 cache per core, and 3MB of L3 cache shared between both of the Pentium’s cores.
It does also have integrated graphics which ran at a maximum frequency of 1.05GHz but
we don’t really care about that for this video. Originally, the G2120’s official launch price
was US$75 (actual prices may have varied), which is around US$83 today, adjusting
for inflation, or roughly £63 or €73. I bought mine for only £4;
at CEX in the UK you can buy it for £5 when it comes back in stock, or for as little as
US$9.25 on eBay in the US at the time of writing. As usual, I’ll be putting the Pentium through
its paces in some games and some benchmarks as well (including the new for this video
Cinebench R20) to get an idea of just how the G2120 performs in 2019. The rest of the system I’ll be using today
features an ASRock Z68 Pro3 motherboard with 8GB of DDR3 RAM @ 1600MHz, an MSI GTX 1080
Armor OC Edition graphics card to eliminate any potential bottlenecks, Windows 7 Ultimate
64-bit and a Phanteks TC14PE cooler to keep the Pentium cool. So let’s get on with the
first test! First up is the newly released Cinebench R20,
a benchmarking tool based on the engine from Maxon’s Cinema 4D software. It uses all of
your processor’s cores to render a photo-realistic 3D scene and presents a score at the end based
on how long it took to render the image. I’ll be running the benchmark 3 times and averaging
the scores to present a reliable representation of what the G2120 is capable of. As it’s a
new benchmark for the channel, I don’t have any comparison scores, although there are
some built-in scores from Maxon’s own testing prior to release. At the Pentium G2120’s stock
speed of 3.1GHz, it managed scores of 452, 452 and 453, for an average of 452.33 points.
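To be concrete about how the averaging works, here’s a minimal sketch, using the three stock-clock runs just quoted:

```python
# Average of three Cinebench R20 runs; averaging repeated runs smooths out
# run-to-run variance and gives a more reliable single figure.
runs = [452, 452, 453]
average = sum(runs) / len(runs)
print(round(average, 2))  # 452.33
```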
Overclocking-wise, Ivy Bridge processors with locked multipliers face the same issue as their
Sandy Bridge counterparts, in that how far you can raise the base
clock is very limited. I did, however, manage to increase the base clock from 100MHz to 106.2MHz
before the PC wouldn’t boot. This gave a core frequency of 3.29GHz and a memory clock of 1698.2MHz.
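Since the multiplier is locked, the base clock is the only lever: the core frequency is simply multiplier × BCLK. A quick sketch of the arithmetic (the 31× multiplier is inferred from the stock 3.1GHz at a 100MHz base clock):

```python
# Locked-multiplier overclocking: core clock = multiplier x base clock (BCLK).
multiplier = 31        # inferred from 3.1GHz stock at a 100MHz base clock
bclk_mhz = 106.2       # highest BCLK that would still boot
core_mhz = multiplier * bclk_mhz
print(round(core_mhz))  # 3292 MHz, i.e. roughly 3.29GHz

# Relative gain from an overclock, as used for the benchmark comparisons:
def pct_gain(stock: float, oc: float) -> float:
    return (oc - stock) / stock * 100
# e.g. pct_gain(452.33, 481.33) is roughly 6.4
```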
With the small overclock, the scores did increase a little, with the 3 tests scoring 479, 480
and 485 respectively, for an average of 481.33, a 6.4% increase in score. Firestrike Physics is a part of the DX11 Firestrike
benchmark in the 3DMark suite of tests (commonly used by overclockers and other tech YouTubers)
and is heavily dependent on the processor rather than the graphics card. And like Cinebench,
I’ll also be running Firestrike Physics a total of 3 times and averaging those scores
out to give a good representation of what the G2120 is capable of, at least in synthetic
benchmarks like this. At stock speeds, the G2120 managed scores of 2971, 2966 and 2942,
for an average of 2959.67 points. With overclocking, if you expected a similar increase in scores
to Cinebench, you’d be right, as with the 3.29GHz overclock, the tests saw scores of
3160, 3169 and 3134 respectively, for an average score of 3154.33, a 6.58% increase over stock. To kick off the game tests today, it’s the ever
popular GTA V, which had its much anticipated PC release in April of 2015, over a year after
the initial release on consoles. Despite its age, it’s still a great game to benchmark
today. I’m running the game at both stock and overclocked speeds at 1080p, using the
lowest settings possible with DirectX 11, to maintain consistency with all my previous
tests. One thing to note though, is that I had to reduce the base clock to 106.1MHz,
as 106.2MHz wasn’t stable. This reduced the processor speed to 3.288GHz and the memory
to 1697MHz. Before we even started, the game actually
crashed at stock clocks while loading into story mode; however, it never happened again,
and it didn’t happen at all with the overclock. Throughout the city, both showed some noticeable
issues with micro-stutter. Fps throughout the city sat around the mid 30s up to around
60fps, which is fairly similar to the overclock’s performance, although the overclock never
dropped below the low 40s. Neither showed any issues with input lag or input locking, which is common
in this game with underpowered processors, and both did in fact seem to run smoothly at
points in the map. Overall though, both showed micro-stutter at points throughout
the whole test, not just in the city. Interestingly, I noticed that with the overclock,
the G2120 actually showed slightly worse stutter than stock did. The Fraps 15-minute benchmark showed average,
1% and 0.1% low framerates of 47, 25 and 18fps respectively at stock speeds, and there were
several spikes in frametimes throughout of around 40-60ms, with several more reaching
160ms at the worst. The overclock showed average, 1% and 0.1% lows of 52, 28 and 20fps respectively,
showing a minor improvement overall. It also showed several spikes in frametimes of around
100 to 280ms throughout the test. With Rise of the Tomb Raider, I was expecting
a poor performance overall given that it is the most modern game on the list of tests
today. I’m also running the game at 1080p with the lowest settings possible, using DirectX
11. And like GTA V, I had to reduce the base clock to 106.1MHz to maintain stability in
the overclocked test. Stock clocks did indeed show a fair amount
of stutter, although admittedly not as much as I expected. At the start of both tests,
the game did actually appear to run smoothly at points, although that didn’t last particularly
long and was the exception to how the rest of the game performed. At times, stock
clocks showed some fairly significant stuttering, especially in areas of the Soviet Installation
and Copper Mill; this was also true for the overclock, albeit less so than stock
clocks. One positive from both tests, though, is that combat was actually reasonably OK,
which makes a change from how hard it usually is on lower-powered processors. The Fraps 15-minute benchmark showed average,
1% and 0.1% low framerates of 51, 19 and 9fps respectively, and according to the frametime
graph, there were several spikes in frametimes of around 40-240ms, with the most severe spike
reaching a whopping 1.42 seconds! The overclock, on the other hand, showed 57, 22 and 10fps
for its average, 1% and 0.1% lows respectively, with frametime spikes of around
100-200ms at the worst. However, I’d personally say the game is
kind of playable with the overclock if you can put up with the stutter. Despite being 7 years on from its initial
2012 release, Counter Strike: Global Offensive, or CSGO as it’s more commonly known, is still
massively popular and even has a fairly substantial professional scene today. It runs
on quite a wide range of hardware, so it’s a great game for benchmarking low-powered or
older processors. I’m running the test in a hard-difficulty competitive bot
match on the Mirage map at 1080p on the lowest settings possible, to maintain consistency
with my previous CPU tests. Unlike GTA V and Rise of the Tomb Raider, CSGO ran perfectly
fine with the base clock at 106.2MHz, so it’s running at a slightly higher processor and
memory speed than the previous two games managed. As I was expecting, the game runs fantastically,
even at stock speeds. Neither the stock nor the overclocked test showed any noticeable stuttering
or locking up whatsoever, other than one occasion with the overclock where I noticed
a brief judder. Performance was perfect otherwise. Stock clocks managed fps in the range of the
low 50s up to around 125 frames per second depending on where you are
in the map, with the overclock bettering this with fps in the range of 70-140 frames
per second. As mentioned, both ran perfectly fine with
no issues throughout the entire test, so there isn’t really much to say about the game.
Stock clocks managed average, 1% and 0.1% lows of 98, 58 and 44fps respectively,
and despite there being no noticeable stutter, the frametime graph does show that
there was some, with spikes of around 70-100ms throughout, although they
are spaced so far apart that you won’t notice them. The overclock, on the other hand,
managed 110, 62 and 46fps for its average, 1% and 0.1% low framerates, and also had frametime
spikes of around 50-80ms throughout, which, like stock, are spaced far enough apart
to be unnoticeable. Lastly for the tests today is Warframe, a
personal favourite of mine which is also free to play. It’s available from Steam or through
the Warframe website itself. This was also run at 1080p on the lowest settings possible
using the DirectX 11 mode, although from the Warframe launcher you can select DirectX
10 mode as well if you don’t have a DX11-capable graphics card. The test was run in
a 15 minute Survival mission on Jupiter. As you can see from the average, 1% and 0.1%
low framerates, the overclock overall actually performed a bit worse on paper than the stock
clocks, although in terms of the actual experience, the overclock did show better performance
at times. On the lander craft, stock clocks had the
fps hitting as high as the mid 270s and only dropping to around the mid 180s, whereas the
overclock never dropped below 200fps and hit almost 300 at points on the ship. Neither
showed any stuttering or locking up here at all and both ran pretty smoothly on the planet
selection screen as well. Throughout the mission itself, the
game again ran extremely well, and both stayed well above 60fps for the entirety of the test,
with fps from the low 80s to around 135fps at times with stock clocks. The overclock
dropped to the mid 70s at times, but that said, some points of the mission were
more intense in terms of on-screen action compared to my run with stock clocks, so performance
was probably extremely similar overall. Neither showed any stuttering or locking up
throughout the entire mission, and both runs were extremely enjoyable and very playable. The Fraps benchmark showed an average of 101fps
at stock clocks, with 1% and 0.1% lows of 65 and 55fps respectively, and no spikes
in frametimes other than one occasion where there was a spike of 40ms. The overclock managed
the exact same average framerate but with slightly reduced 1% and 0.1% lows of 62 and
51fps respectively, showing there were more dips in framerate, but as mentioned,
some points of the mission were more intense. There was also only one frametime spike
of 40ms, much like stock clocks. Overall, I was actually quite impressed at
how well the G2120 performed in some reasonably modern games. It was never intended for gaming,
and yet it offers some very playable levels of performance in games such as CSGO and Warframe.
More CPU-intensive games such as Rise of the Tomb Raider will struggle, although even
with Rise, the performance with an overclock would be tolerable, at least to me,
if the G2120 were all I had. GTA V is also kind of playable. For the £4
I paid for it, it was an absolute bargain, and it would probably run a lot of older games
just fine, or at least to a playable enough level. If you enjoyed this video, please consider
giving it a like and leaving a comment as well. I’d also really appreciate it if you
could share this with anyone you think may enjoy it. If you’d like to support me in creating
these videos, you could subscribe to my channel and maybe even consider supporting me through
my Patreon at patreon.com/benchinggaming, or through Ko-fi at ko-fi.com/benchinggaming.
You don’t have to but I’d be eternally grateful if you did. Unfortunately, I’m struggling
to get my hands on more DDR2 RAM, so this may be the last video for a few weeks, at
least until CEX gets their stuff together and manages to actually send what I ordered.
Smol rant aside, I hope you enjoyed the video and learned something from it, hopefully I’ll
see you in the next one, and if not, thank you for taking the time out of your day to
watch my content, I really appreciate it 🙂

Is an Ivy Bridge Pentium a Decent Budget CPU for Gaming? | Benching&Gaming

8 thoughts on “Is an Ivy Bridge Pentium a Decent Budget CPU for Gaming? | Benching&Gaming”

  • March 26, 2019 at 2:19 pm

    Nearly 200 subs!!!! (Really good video by the way)

  • March 26, 2019 at 3:04 pm

    Have you tried Apex Legends yet?
    I'm playing around with the settings and videoconfig to try and get somewhere near competitive instead of just being bullet fodder.

    I'm running an i7 4770k at 4ghz on an Asrock Z87 pro3 board with 16gb (4×4) G.Skill Ripjaws DDR3 1600 and an EVGA 2gb GTX750ti.

    I'm Really struggling!
    I started out at native res for my TV/monitor (via HDMI @60hz) 1920×1080 and everything on low settings or off, and dying so fast because by the time I've raised my gun the enemy has emptied half a mag into me and had a cup of tea before I hit the ground.

    I've tried various resolutions and currently I'm using windowed custom 1408×7 something that relates back to true 16:9 scale so that it's not blurry like if I'd used 720.
    I've done the turning off shadows thing in videoconfig plus played about with LOD ( in Nv inspector) which is very frowned upon since it could be seen as cheating at least at 0x00000078, so I'm currently on 0x00000024 to keep model detail but not so much textile textures etc. It's almost playable, but it's somehow increased input lag, at least that's how it feels oddly.
    I don't intend to keep using that because I don't fancy being banned.

    I'd love to hear your views and ideas on improving things from here.
    I don't have the money to buy a new gfx card ( heck I only just got the 750ti over Christmas having upgraded from a 512mb HD5670 😂)
    If I was able to get a card it looks like it would be a 1660ti as it has a low(ish) power consumption, which is a factor for me as I'm on it a lot (pc) but don't have much of any spare income.

  • March 26, 2019 at 4:43 pm

    So much detail, your videos are amazing mate! Keep up the great work!

  • March 26, 2019 at 5:49 pm

    You can really see how much work goes into your videos Great Job! 🙂

  • March 26, 2019 at 10:49 pm

    Amazing benchmark as always keep going bro

  • April 3, 2019 at 11:44 am

    Try world of warships

  • April 3, 2019 at 11:47 am

    Dual-core CPUs can still run games, but quad cores are starting to become the bare minimum for gaming; that will fully happen in a couple of years.

  • May 25, 2019 at 1:07 pm

    👍 Great review, as always.

