Benchmarking database servers on desktop hardware

Long time no post, again. I have about 10 drafts started that will probably never see the light of day, since I abandoned them. On the other hand, I've been quite busy for the last 18 months overclocking everything I could get my hands on. As of now, I have 676 submissions in the hwbot.org database. And suddenly I thought: how about combining those two, databases and overclocking?
The idea: set up a simple database, prepare a schema and a workload, and start testing, first with default hardware settings, then overclocked. This should show how much quicker the workload runs under different conditions. I don't have server-class equipment and, unfortunately, my workhorse benching motherboard just broke, but that gives me time to prepare. I'm aiming for OLTP, as OLAP doesn't make much sense with my processing power. The defaults I'm considering: the CPU is an Intel Core i5-4690K running at 3.5 GHz; memory might vary, but I'll start with 2×2 GB of DDR3-1600 at 9-9-9-27; storage is a 120 GB Samsung 850 EVO. The tests I plan to run will cover the following scenarios:

  1. All default (to establish baseline)
  2. CPU default, memory overclocked by tightening the timings to 1600@6-7-6-20
  3. CPU default, memory at 2000@7-8-7-22
  4. CPU default, memory at 2400@8-11-8-24
  5. CPU default, memory at its maximum settings, 2666@8-12-8-28
  6. CPU at 4 GHz + all 5 memory combinations
  7. CPU at 4.5 GHz + all 5 memory combinations
  8. CPU at 5 GHz + all 5 memory combinations
  9. CPU at its maximum, which should be around 5.3 GHz, knowing this chip's overclocking potential under single-stage cooling at around -50 degrees Celsius

I will have a starting point for comparing performance across SQL Server versions (2005 through 2016) and different hardware configurations (what happens if I switch to a dual-core Intel G3258 and overclock it? what happens when I change the RAM configuration to 2×4 GB, 4×2 GB or 4×4 GB? what if the DB server runs in a virtual machine?), which also leaves room for other experiments.
I'm not focusing on I/O here, since a single SSD should already saturate the bandwidth of the storage controller; I'm more interested in CPU/RAM subsystem performance. I still need to put together the set of tests to execute; so far I've thought about:

  • 1 million inserts into a heap (see the first sketch after this list)
  • retrieving 100k records from a table with a scalar UDF executed per row (RBAR :P); a sketch of this follows too
  • CLR vs. built-in aggregates
  • compression/encryption (a compression sketch is included below as well)
  • or simply taking HammerDB and running a TPC-C workload
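
To make the first item concrete, here's a rough T-SQL sketch of the kind of loop I have in mind. The table name and shape (dbo.HeapTarget with an INT and a CHAR(100) column) are placeholders I made up, not the final harness, and I'm sticking to SQL Server 2005-compatible syntax so the same script runs on every version I want to compare:

```sql
-- Hypothetical test table: a plain heap, no indexes, no constraints.
CREATE TABLE dbo.HeapTarget
(
    Id      INT       NOT NULL,
    Payload CHAR(100) NOT NULL
);
GO

SET NOCOUNT ON;

DECLARE @i INT;
DECLARE @start DATETIME;

SET @i = 1;
SET @start = GETDATE();

-- One explicit transaction around the whole loop, so the timing isn't
-- dominated by a log flush per row (each INSERT would otherwise autocommit).
BEGIN TRANSACTION;

WHILE @i <= 1000000
BEGIN
    INSERT INTO dbo.HeapTarget (Id, Payload)
    VALUES (@i, REPLICATE('x', 100));
    SET @i = @i + 1;
END;

COMMIT TRANSACTION;

SELECT DATEDIFF(MILLISECOND, @start, GETDATE()) AS ElapsedMs;
```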
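
And a sketch for the scalar UDF test. Again, dbo.TaxedPrice and dbo.Prices are names I'm inventing for illustration; the point is simply that a scalar UDF in the SELECT list gets invoked once per row, which is exactly the RBAR behaviour I want to measure:

```sql
-- Hypothetical scalar UDF; SQL Server executes it once for every
-- qualifying row instead of as a set-based operation.
CREATE FUNCTION dbo.TaxedPrice (@price MONEY)
RETURNS MONEY
AS
BEGIN
    RETURN @price * 1.23;
END;
GO

-- Pull 100k rows, calling the UDF for each one. dbo.Prices is an assumed
-- test table with Id and Price columns.
SELECT TOP (100000)
       Id,
       dbo.TaxedPrice(Price) AS PriceWithTax
FROM dbo.Prices
ORDER BY Id;
```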
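
For the compression item, the simplest starting point is probably rebuilding the same heap with page compression and re-running the insert and read tests. Note this is SQL Server 2008+ syntax, and data compression is an Enterprise-only (or Developer edition) feature in the versions I'll be testing:

```sql
-- Rebuild the heap with page compression, then compare timings
-- against the uncompressed runs.
ALTER TABLE dbo.HeapTarget
REBUILD WITH (DATA_COMPRESSION = PAGE);
```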

Pretty much enough to keep me busy during long autumn evenings. 🙂