
Watercooling A Desktop


A journey in computer design.

Senior Project Synopsis

The paper I wrote to document the process.

I originally set out to build a computer and test its efficiency and power usage. I also set out to learn a bit of HTML once the computer had been tested and benchmarked with software like 3DMark and Prime95. My goals were to include “a custom water-cooling system with a pump and a reservoir to cool the processor” as well as, hopefully, “graphics [cooling], custom cable sleeving on the power supply and powerful components”.

    The most important component of a computer is the processor; without it, no calculations can take place and nothing can be accomplished. Processors range in price from very cheap to extremely expensive, but most consumer processors fall in the 200-400 dollar range (D. C. Price). Low-end processors are typically good enough for most consumers, while the high-end market serves more niche uses, such as rendering video or rapidly solving difficult mathematical problems. These higher-end processors are dropping in price as time progresses, with Intel rolling out its least expensive enthusiast-series processors yet (CPU World). Intel releases processors on a tick-tock cycle: the tick shrinks the manufacturing process, mostly yielding better efficiency and reliability, while the tock introduces a new architecture, a much larger change to the design itself (Intel Corporation). The most recent desktop processor release falls on the tock side of the cycle, and it has added benefits thanks to the industry-wide rollout of DDR4 RAM, memory manufactured at greater density that provides faster speeds and better energy efficiency. Although they are more expensive (Cambell), the most recent generation of Intel processors is a better choice for high-end computing, as their newer memory technology and increased core counts have long-lasting benefits. The increase in cores is not always useful, as some applications do not use the extra cores and are poorly optimized for more threads. Contention between software components can also occur, causing excessive slowness and issues for software that should have no trouble running normally (Hazelwood). This may seem like an unsolvable problem, but modern software is getting better and better at handling multiple cores, and I expect that within a few years almost all software will take advantage of more than two or three cores.
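
To illustrate why extra cores only help when software is written for them, here is a minimal Python sketch (not part of the original project) that spreads a CPU-heavy task across several processes with the standard multiprocessing module; the worker function and the choice of six chunks are placeholders that simply mirror a six-core CPU.

```python
import math
import time
from multiprocessing import Pool

def heavy_task(n):
    """Placeholder CPU-bound work: sum of square roots up to n."""
    return sum(math.sqrt(i) for i in range(n))

if __name__ == "__main__":
    jobs = [5_000_000] * 6  # six chunks of work, roughly one per core on a 6-core CPU

    start = time.time()
    serial = [heavy_task(n) for n in jobs]      # a single core does everything
    print(f"serial:   {time.time() - start:.2f} s")

    start = time.time()
    with Pool(processes=6) as pool:             # the same work split across 6 processes
        parallel = pool.map(heavy_task, jobs)
    print(f"parallel: {time.time() - start:.2f} s")
```

Unless a program is structured this way (or threaded equivalently), the extra cores simply sit idle.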

    The graphics card, or GPU, is the second most important component in high-end computing, as it handles most graphical calculations and drives the display. All computers have some form of graphics hardware, but many rely only on a rendering chip built into the processor. This works well for machines used for basic needs, but rendering video or displaying video games demands more power, and that is where a dedicated graphics card comes in. GPUs have thousands of cores: while CPUs typically have a few fast cores that can each handle very complex tasks, GPUs perform huge numbers of very simple tasks at once, allowing them to render images far more efficiently than the processor can (NVidia Corporation). This makes them perfect for creating images and building incredibly complex worlds out of millions of tiny dots called pixels (NVidia Corporation). Recent research shows that lower GPU temperatures, combined with higher clock speeds (how fast each computation can occur, usually measured in megahertz or gigahertz), allow lower voltages and therefore much greater power efficiency than previously believed (D. C. Price). This finding has both large-scale implications and real-world application, as better power efficiency matters to end users and large corporations alike. It also means that cooling is becoming exceptionally important.

    Liquid cooling is becoming a much more viable method of removing heat from computer systems. With an increase in consumer availability of parts, as well as increased interest from manufacturers and consumers in the extreme computing market, far more liquid-cooled PCs are being built now than ever before (Schmidt). Servers are also being liquid-cooled more often, as it is quieter and typically more energy efficient than running banks of fans. Fluid carries heat directly away from the source rather than relying on air blown across a heat sink. This means much better heat dissipation, resulting in lower operating temperatures and more headroom for increasing speeds. Typical water-cooling systems are simple and straightforward: a pump circulates fluid, typically distilled water or another fluid with a high heat capacity, through a closed loop of tubes connected to “blocks”, metal plates attached to the hardware components that produce heat. These blocks act as heat sinks to pull heat away from the vital components in the computer. To shed the heat carried by the now-warm water, radiators are used, much as in a car, with fans that push the heat away. In some industrial cases, refrigeration units are used to chill the liquid again, with the excess heat exhausted outside the system. This method of cooling is more efficient than air cooling because it cools only the components that produce heat, rather than a large part of the computer.

    Hopefully this background information will help in understanding the complicated process of choosing components that work together and are compatible, as well as the long and drawn-out process of putting them all together.

    This project began in September with testing an older computer to measure its power usage and efficiency both under load and when idling. At idle, the desktop drew 78 watts at a power factor of 0.96 and used 90 volt-amps. Using Chrome or running slightly strenuous processes pushed the power usage up to 120 watts. Meanwhile, while running benchmarking programs that stress the computer to its maximum, the PC used 237 watts but became more efficient in its power draw, with a power factor of 0.98; while running Kombustor (MSI), the apparent power jumped to 248 VA. Running Prime95 (Great Internet Mersenne Prime Search) at the same time, a common CPU stress-testing utility used to test overclocks by hunting for Mersenne primes, pushed my old system to 281 watts and 284 VA, with a power factor of 0.99. In other words, as the system's load increases, the power factor increases as well, an interesting realization: the power supply seems to draw power more efficiently at full processor and graphics load.
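
Power factor is just real power (watts) divided by apparent power (volt-amps), so the full-load readings above can be sanity-checked directly; a minimal Python sketch:

```python
def power_factor(watts, volt_amps):
    """Power factor = real power / apparent power."""
    return watts / volt_amps

# Full-load readings (Kombustor + Prime95) from the KillAWatt measurements above
real_power = 281      # watts
apparent_power = 284  # volt-amps

print(f"power factor: {power_factor(real_power, apparent_power):.2f}")  # ~0.99
```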

    By today's standards this old system was low end, with an Intel i5 650 from around 2010 and an NVidia GTX 460 GPU of the same vintage. Compared to a typical machine that most people own, my old computer would still run laps around them, but a high-end system like the one I have built is in another league entirely. The pictures below show the benchmarking programs that I used, as well as the setup I had at the time, with the KillAWatt monitoring power usage.

September also marked the beginning of my research into which components to purchase, as well as budgeting to figure out what I could and couldn't afford. I mostly focused on monitors at this time, looking at higher-resolution monitors beyond 1080p, since I wanted lots of uninterrupted screen space for homework and multitasking. I decided that I wanted a 2560x1440 pixel IPS display; IPS means In-Plane Switching, a panel type that produces a much higher-quality picture. At this point I was considering the Dell U2713HM-IPS-LED or the Qnix QX2710, and I later decided on the QX2710 because of the price, even though the Dell monitor came with a guarantee of no defects.

    Like September, much of October was taken up with research into which components I would buy and how much money I would be spending on the new machine. I narrowed down the components I wanted to purchase, although I began to waver between two GTX 970s and a single GTX 980. I also researched processors and hit a slight dilemma. I needed to settle on a budget, and if $2500 ($1550 of which came from selling my old PC) was the number, I wanted really high-end components. I was not sure the benefits of DDR4 through Intel's new Haswell-E processors were worth it, as I thought I would have to pay $589 for the processor with enough PCI-E lanes to support three GPUs in SLI (a technology that distributes graphics load between GPUs). I later realized this was incorrect (Crijns), as the lower-end processors support 3-way SLI as well.

    I began making decisions in November, and I ordered the Qnix monitor on November 10th. This decision was based on a lot of factors, but mainly that it could be overclocked, a major goal of this project; that means refresh rates higher than the standard 60Hz, which gives a less blurry view of video games and benchmarks. The human eye can notice the difference between 60 and 90Hz, and I was excited to see what that would mean for day-to-day usage. When I eventually got the monitor, I was incredibly happy. It came with no dead pixels and no light bleeding, as well as some of the best color calibration I have ever seen in a display, as you can see from the image below. I immediately overclocked it and was even happier to learn that it could run at 120Hz, not just 96Hz, twice the refresh rate it shipped with (lawson67).

Two images showing the contrast differences between the displays.

    I decided on the i7 5820k as my processor of choice. This choice relied on a few key things, but primarily the upgrade path. DDR3 memory was released in 2008 (Murray), and since then it has been the standard type of memory for nearly every motherboard. Only in the last three months has the new DDR4 standard come out, and it is beginning to proliferate, starting with the highest-end devices and working its way down. Since I would be an early adopter of the platform, I decided to take a leap and purchase a processor that supports the new standard. This, combined with the fact that having six physical cores is awesome, made me decide to purchase this processor.

    Deciding on a motherboard was quite a bit trickier. I had to decide whether I wanted the computer to be tiny or just normal-sized; even though I could have designed a system to fit in a tiny case, size was less important to me than speed, and I believe this decision helped me build the most stable and reliable system I could. I initially decided on the Asus X99-A, but ended up spending $50 more to buy the X99-Pro, simply because it had a covering that made the computer much more aesthetically pleasing. I chose Asus because they historically have the most reliable motherboards with the best features. Asus motherboards also have a special socket for the processor called the OC Socket, which has extra pins compared to a typical X99 socket to allow for greater power delivery. I did not know whether this would help with real-world applications, but even if it did not I would still have been happy purchasing this motherboard.

    Choosing the brand and speed of DDR4 RAM was a difficult decision, mostly because it involved quite a few choices. First I had to decide whether the upgrade to DDR4 was worth it, and, even though the difference was small, it was enough to justify buying into a faster platform. Second, I had to decide whether water-cooling my RAM was an option, and I concluded it would be better to simply rely on the heat spreaders built into the sticks, since the low voltage and high density of new RAM means it runs very cool. The other decision was speed: DDR4 memory is sold at a range of speeds from stock (2133 MHz) to extremely overclocked (3333 MHz). I knew immediately that I did not want extreme speeds, because the pricing on factory-overclocked sticks that fast is insane, well into the thousands of dollars. That meant choosing a speed worth it for my applications, so I settled on something above 2133 but below 3000, landing in the middle of that range at 2400 MHz. I initially decided on Corsair Vengeance LPX, but I changed my mind later because of price and ended up purchasing Crucial Ballistix Sport, which had better timings, meaning lower initial access latency, as well as a much better price, almost $100 cheaper.
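
For a sense of what “better timings” means in practice: the first-word latency of a RAM kit, in nanoseconds, is roughly the CAS latency in clock cycles divided by the memory clock (half the DDR data rate). The CAS values below are hypothetical examples rather than the actual specs of either kit mentioned above; only the formula is standard.

```python
def first_word_latency_ns(cas_latency, data_rate):
    """Approximate access latency: CAS cycles / memory clock (MHz), in nanoseconds.
    DDR transfers twice per clock, so the clock is half the advertised data rate."""
    clock_mhz = data_rate / 2
    return cas_latency / clock_mhz * 1000

# Two hypothetical DDR4-2400 kits; the CL values are illustrative only
print(f"CL16: {first_word_latency_ns(16, 2400):.1f} ns")  # ~13.3 ns
print(f"CL15: {first_word_latency_ns(15, 2400):.1f} ns")  # ~12.5 ns, tighter timings win
```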

    The graphics card decision was a lot easier for me to make. I knew that I would be buying a graphics card from NVidia, even before the new generation of 900-series GPUs was released. This came down to a few key reasons, but primarily that I have purchased their products before and have been very happy with the results. I also knew that I could choose either the GTX 980 or the GTX 970, and this seemed like a hard decision until I realized my options were to risk 3-way SLI with 970s, something that has not been very reliable in the past, or stick with what I know and use two GTX 980s. Once I decided on the GTX 980, the hard part was figuring out which brand I wanted. NVidia distributes its chips to different manufacturers, who put them on their own video cards, overclock them, and install impressive heat sinks in order to make money. I decided to look into water blocks for GTX 980s, and I found that I could buy a custom water block for the ASUS Strix brand of GPU, with a laser-engraved logo and much more metal than the generic water block. This settled my GPU and GPU water block decision in one fell swoop, and I purchased both the water block and two Strix 980s.

    The case was easy: I knew I wanted enough space to hold everything and still have room to move around, and I wanted something sleek with sharp edges to complement the parts I was getting. This eliminated a lot of choices, and after careful deliberation I decided on a Corsair 750D, a case with a lot of features and dust filters that mean cleaner intake and exhaust air, so I have to clean it less. This case also weighs much less than the case I used for my first computer, a thirty-five pound beast called the HAF-X; while that case was made of thicker metal, the Corsair case is much lighter at only twenty-one pounds.

    In terms of water-cooling components, the choices were much more difficult, and I relied heavily on forums and real-world experiences to decide on the parts. The EK Supremacy EVO is a CPU water block that beats almost every other block in nearly every performance test, so I decided to purchase it (Moonmanovich). I paid a bit more for a block made entirely of nickel-plated copper, rather than a copper-and-nickel base with a plastic top, simply because I figured it would go well with the GPUs.

    When it came to tubing and fittings, the choices were more aesthetically driven than anything. I knew I wanted hard tubing rather than the typical flexible tubing, because I wanted the sharp edges and ninety-degree bends that are only possible with solid tubing. In terms of material, I quickly realized that acrylic tube, the usual choice for solid tubing builds, was too brittle and easy to shatter (Mods). I found PETG tubing, a type of plastic that is much more durable and even easier to work with, and learned that it was an up-and-coming product in the PC water-cooling industry. I also wanted my computer to have a white and black color scheme, and after reading about how dyes can stain tubes (rubix_1011), I decided to get white tubes rather than clear tubes with dyed coolant.

    That made the fitting decision very easy, and I bought Primochill Revolver fittings because they were the newest and most popular option on the water-cooling scene. I also thought they looked nice, and a pack of ten turned out to be the perfect amount for my entire system.

    The pump was a bit more difficult, especially since I did not know anything about what was important in terms of speed, PSI, overall head, or the other values advertised with pumps. Using a reference I found online (FrozenCPU), I learned that there are two main pump variants, the D5 and the DDC, and that most others are simply rebranded versions of these two. I also knew that loudness was a concern and wanted the quietest, most efficient operation possible, so I decided to use a pump supporting PWM, or Pulse Width Modulation. This means the motherboard monitors temperatures and speeds and can adjust the pump's rotation to reduce noise and strain on the pump. This led me to the MCP655-PWM, a pump that is not very common but is based on an extremely common pump, the MCP655. It has everything I was looking for, including PWM support, extreme reliability, and plenty of power. I also needed a pump top, a plastic housing that holds the pump and keeps it contained while providing input and output ports for attaching tubing. This was easy to find, as I located a pump top that worked for my pump and also accepted a reservoir attachment. The reservoir holds water and lets air bleed out of the system, an important factor in keeping the computer as noise-free as possible. After selecting all of these components, there was one last thing I needed to purchase in order to build a water-cooling loop, and that was a radiator…or two.
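
The idea behind PWM control can be sketched in a few lines: a controller (in reality the motherboard's fan/pump firmware, not a script like this) reads a temperature and maps it to a duty cycle, so the pump only spins as fast as the current heat load requires. The temperature and duty-cycle breakpoints below are made up for illustration.

```python
def pwm_duty_cycle(coolant_temp_c):
    """Map a coolant temperature to a pump PWM duty cycle in percent.
    The breakpoints are illustrative, not taken from any real pump curve."""
    if coolant_temp_c <= 30:
        return 30.0          # idle: quietest speed the pump is allowed to run
    if coolant_temp_c >= 45:
        return 100.0         # heavy load: full speed
    # linear ramp between 30 C and 45 C
    return 30.0 + (coolant_temp_c - 30) / (45 - 30) * 70.0

for temp in (25, 35, 40, 50):
    print(f"{temp} C -> {pwm_duty_cycle(temp):.0f}% duty cycle")
```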

    I ended up purchasing one 280mm radiator, which holds two or four 140mm fans, and one 360mm radiator, which holds three or six 120mm fans. This was an easy decision, as there is not much to radiators beyond buying ones that fit in your case and have the size and thickness needed to dissipate the total wattage of your components. TDP, or thermal design power, is the number manufacturers publish for the maximum amount of heat a component is expected to produce, which roughly corresponds to its maximum power draw. Water-cooling a quiet computer typically requires enough radiator space to dissipate about 1.2 watts per mm of radiator, so one 120mm radiator handles roughly 144 watts. My PC has two GTX 980s (165w each) and one 5820k (140w), which totals 470 watts of heat to dissipate, or a minimum of roughly 390mm of radiator.

    Since I bought one 360mm radiator and one 280mm radiator, I have a total of 640mm of radiator space, or 768w of heat dissipation capacity. This is far above the minimum, and it allows my computer to run silently even when playing videogames. I also found fans with very high CFM (cubic feet per minute) ratings, which means fast, powerful airflow through the radiators even at slow fan speeds.
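
Here is the radiator-sizing arithmetic from the last two paragraphs written out as a short Python script, using the 1.2 W/mm rule of thumb and the TDP figures quoted above.

```python
WATTS_PER_MM = 1.2  # quiet-build rule of thumb used above

# TDP figures quoted above
component_tdp = {"GTX 980 #1": 165, "GTX 980 #2": 165, "i7-5820K": 140}
total_heat = sum(component_tdp.values())        # 470 W of heat to dissipate

min_radiator_mm = total_heat / WATTS_PER_MM     # ~392 mm minimum
installed_mm = 360 + 280                        # the two radiators purchased
capacity = installed_mm * WATTS_PER_MM          # 768 W of dissipation capacity

print(f"heat to dissipate: {total_heat} W")
print(f"minimum radiator:  {min_radiator_mm:.0f} mm")
print(f"installed:         {installed_mm} mm -> {capacity:.0f} W capacity")
```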

    The final components are the hard drives, two of which are for data storage and are simple spinning disks. The primary drive is a solid state drive, which means that instead of a spinning disk it has a circuit board with chips that store data, similar to a flash drive but astronomically faster. The spinning disks are much slower: both have a maximum read speed of 126 MB/s according to Western Digital's spec sheet for the 1TB version of the Caviar Black hard drive (Western Digital). Contrast that with the SSD, a Samsung 850 Evo with listed speeds of 540 MB/s (Samsung), and the difference is huge. Because the secondary storage on my PC will mostly hold media like movies and games, the speed difference will be less noticeable, but I still wanted to narrow the gap. That's where RAID comes in.

    RAID is actually a very old technology; the acronym stands for Redundant Array of Independent Disks. It was invented for a few reasons: to offer an inexpensive way to get fast storage on servers, because in 1987 there were few hard disks that were both inexpensive and fast, and also to provide redundancy so that storage was less likely to fail and data was easier to protect. RAID is arranged into levels, from 0 to 5, each meaning something different: RAID 0 stripes data across disks for speed, RAID 1 mirrors disks for redundancy, and the higher levels combine striping with parity information.

    I had another hard drive lying around of the same type as the secondary disk I was planning to use, and for this reason I decided that RAID would be perfect for me. I ran a benchmark both before and after setting up a RAID 0 array, and I realized something amazing: I got literally twice the speed from running the two drives in parallel compared to each one by itself, as shown by the before-and-after benchmarks below, taken with ATTO Disk Benchmark (ATTO). The red bars are write speeds and the green bars are read speeds, in megabytes per second.
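
The reason RAID 0 roughly doubles sequential throughput is that the array stripes data across both disks, so each drive only has to read or write half of any large file. A toy Python sketch of the striping idea, with in-memory byte arrays standing in for disks and a made-up stripe size:

```python
def raid0_stripe(data, num_disks=2, stripe_size=4):
    """Deal fixed-size stripes of data out to the disks round-robin (RAID 0)."""
    disks = [bytearray() for _ in range(num_disks)]
    for i in range(0, len(data), stripe_size):
        disks[(i // stripe_size) % num_disks] += data[i:i + stripe_size]
    return disks

disks = raid0_stripe(b"ABCDEFGHIJKLMNOP")
print(disks)  # [bytearray(b'ABCDIJKL'), bytearray(b'EFGHMNOP')]
# Each disk holds half the data, so both can be read at once for roughly 2x throughput,
# but losing either disk loses the whole array: RAID 0 has no redundancy.
```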

    When the time finally came to build my computer, I already knew what the layout of tubing and parts would be, thanks to many hours of planning and diagramming the inside of the case. Using a handy Microsoft Paint diagram made in five minutes during Pru's class (my art teacher) as my guide, I began construction.

    I began by flushing the radiators, a task that was relatively easy but also painful, simply because holding a gallon of distilled water in one hand and a big hunk of copper in the other is no small feat.


    After rinsing the radiators it was on to removing the GPUs and HDD cages, which was not too hard, and the computer quickly went from this: 

To this:

    Once basically everything except the motherboard and processor was removed, I began the process of taking apart the GTX 980 STRIX graphics cards.

    In the pictures below you can see me placing thermal pads on the VRMs of the GPU. VRM stands for Voltage Regulator Module, the circuitry that regulates the power the GPU receives. After placing these pads on the graphics cards, more pads were applied to other parts of each card, and the shiny chip in the middle (the actual graphics processor) received an X pattern of thermal paste so it would be cooled properly. The wire is the fan cable from the previous heat sinks that were on the GPUs; it was difficult to remove, but a flat-head screwdriver did the trick.

    The water block was easy to install: just a few screws bolted it to the card, and then the back plate bolted down on top of it. The next step, putting together the FC Terminal, a piece that links the two GPUs, was pretty straightforward as well. It required removing a part that would have let two tubes connect to each GPU, and then it screwed right on. Once it was finished, the two GPUs became a solid block that could not be wiggled at all. I slotted this assembly into the PCI-E slots and began the next step, mounting the CPU water block.


    The EK Supremacy EVO CPU water block is a beautiful thing, constructed of solid copper and plated with nickel. An interesting feature of this block is that it can be opened up to swap parts, allowing performance to be customized to the size and shape of the CPU being cooled. The picture below shows the micro-channels in the top of the block, which force water through at high speed to pull heat off of the processor. The lower part of the block, which actually faces upwards, holds the jet plate, a metal plate that can be swapped for thicker or thinner ones depending on the application. Installing this part was very easy: it simply required some thermal paste on the CPU before the block was screwed down in an X pattern to provide even pressure.

    The next step of the build process was to install the radiators, and they went in without a hitch, one with four fans on the front and one with only three on the top. The picture to the right shows the 360mm radiator with fans attached, although I ended up turning this radiator around for aesthetic purposes. I did end up with a few missing screws, which turned out not to be a problem, as there were many extra screws included in both radiator boxes.

The top radiator with fans installed

    The final part of the build process was installing the tubing, and I managed to finish two tubes on the first day. Bending the tubes was incredibly painful, as I had to heat the PETG to the right temperature to bend but not melt, and then bend it quickly and precisely around a template that I made. I ended up ruining enough tubing that I could not finish the build and had to order more. Once that arrived I made (decently) quick work of the rest, which means about six hours for three tubes. I also filled the loop for the first boot with water-cooling. This was a very frightening experience, as I had no idea whether there would be leaks or whether the computer would even work. I was also worried that I had messed up the mounting on one of the GPUs and would be unable to run the computer until I opened the graphics card back up and redid the thermal paste. Luckily, this was not the case, and the computer ran amazingly well, though without fan control.

The pump, with a piece of tubing that took three hours to bend

    After figuring out that the cables I had received needed to be modified to allow for control over the fans, I immediately proceeded to snip the wrong ones and had to return them. A few days later I received new ones that would work, and installed them, making the computer quiet.

    Towards the end of the project I decided that, instead of using the LED controller included with the light strip I installed in the computer, I would switch to a microcontroller (a tiny computer) for light control. I received an Arduino and a shield for it that will allow me to interface with the LED strips in the computer. This will let me run programs on my computer to interface with the lights, allowing much finer control than I had before.
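
As a sketch of what “running programs on my computer to interface with the lights” could look like, here is a small Python script that sends a color command to the Arduino over USB serial using the pyserial library. The port name and the “R,G,B” command format are assumptions for illustration; the actual Arduino sketch would need to parse whatever protocol ends up being used.

```python
import serial  # pyserial: pip install pyserial

def set_led_color(port, red, green, blue):
    """Send a hypothetical 'R,G,B' newline-terminated command to the Arduino."""
    with serial.Serial(port, 9600, timeout=1) as arduino:
        arduino.write(f"{red},{green},{blue}\n".encode())

# Example: set the strip to white on a hypothetical port name
set_led_color("COM3", 255, 255, 255)
```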

    When I look back at this year and what I have accomplished, I feel as if I have done a very good job of managing my time, especially considering all of the obstacles I have faced. There are a few times I wish I had documented more in my journal, but other than that I feel as if I stayed very well within my initial schedule, aside from being unable to write a simple webpage. That goal felt like a stretch from the beginning, so to me it is not a total loss. I have also begun to explore coding because of this project, as I am learning to make GUIs and use real programming languages for a practical purpose, even if that purpose is controlling the lights on this computer. I am incredibly excited to see where I can go from here, as I think there are many paths that this project will open for me.

    Overall, I am happy with myself as an independent learner, and while there have been a few slow-downs over the course of the project, my pace has been consistent thanks to my interest in the topic and the fact that I am used to independent work from multiple independent projects at Tandem and in robotics. This project has been an incredibly fun experience, and I plan on expanding it as the year progresses.