In the early 90's, my Bride considered the pile of computers around my desk. "Why not just get one big one?" Fast forward a couple weeks where I was shopping at the Lockheed outlet store - and found a Sun 3/280s and 19" wide, 8' tall rack for $25. She was not amused to have a 'big one' the size of a fridge in our tiny apartment.
That old server has gone through many, many incarnations of hardware.
My wife came into our relationship with not just one, not just two, but THREE NeXTcubes. I negotiated her down to one actually-working one and one decorative one, and gave away the third.
I had a proliant 7000 (quad pentium II xeon) for a while, it was 16U and as loud as three vacuum cleaners. The spouse was very happy when that disappeared.
About 20 years ago I moved into a tiny studio with my first serious girlfriend. She didn't appreciate the beautiful sounds of the 1980s IBM Model M keyboard I'd been using since I was 5, probably more so since I usually worked late into the night. Given that another gripe was the significant amount of floor/desk space used by my two 19" CRT monitors, I compromised and replaced the Model M with some $20 Logitech thing, throwing the M out in the trash.
Boeing does a lot of surplusing through EHLI Auctions [1] these days. Prices aren't always great, and I'm not sure I've seen machine tooling or huge wrenches, but big batches of office and workshop stuff marked Boeing surplus show up from time to time.
Similarly, GE does it through Link Warehouse (https://www.linkwarehouse.com/), but it is geared towards industrial equipment. I haven't noticed any office stuff.
Man, I will say it! Before Apple made beautiful computers... Sun made very pretty machines (for that era!). I always wanted one of those purple beasts :D
A few years ago I bought a switch that could rack mount and ended up looking at getting a rack to put most of my stuff in. I quickly abandoned that idea when I saw that rack mount makes things cost significantly more. I remember that the UPS pricing was especially nasty. Felt like they were just charging what they could get away with considering that it's really only businesses buying rack mount equipment.
Though if I ever buy a house I might end up going Linus-level crazy and put a rack in the basement or a utility room with fiber optic cabling back to my desk for peripherals to attach.
Absolutely. Many small businesses are powered by two-generations-ago rack mounted appliances and blades.
It's often the most cost effective way to get redundant and A+A power supplies in servers that you need to protect beyond simple UPS methods.
Source: Loving the 2U and 4U blades with 2017-2020 era EPYC and Xeon processors with dirt-cheap Tesla P40 24GB VRAM GPUs in them for inference and all sorts of compute loads.
There’s no such thing as a 2U blade; blade servers are a different thing from #U “pizza boxes”. Generally speaking they’re chassis-mounted servers with shared power and I/O through mezzanine cards/backplanes (yes, there are blades without shared power and/or I/O, but they’re the exception).
Blade enclosures are different than blade servers. Small servers like that are also very unlikely to be loaded up with high-watt GPUs for inference or training workloads.
The number of Us is about height, not width or mounting hardware - and all the blades I've worked with were 2U high for a "9-inch rack" that just happened to be mounted inside a larger rack.
I mean, it’s a free country you can do whatever you want, but you should be aware that you are not using that term the way anyone else uses it.
Rack Us absolutely refer to a form factor not just a z-height. It would make no sense to refer to something in # of Us unless you either want to then mount it in a rack, or be deliberately annoying.
> but you should be aware that you are not using that term the way anyone else uses it.
Most people (including, typically, the companies actually manufacturing and selling blades and enclosures) would also in this context reasonably infer "2U or 4U blade" to be referring to a (partially-or-fully-)filled enclosure, rather than pedantically assume that someone (who is implied to own and use such a server) doesn't know the difference between a blade and a pizza box - but given that we've already deviated from using terms the way anyone else uses them, what's one more deviation, right? :)
Of course they wouldn’t, blades are blades and enclosures are enclosures. It’s really not that difficult of a concept.
The entire point of blades is that they’re modular and a chassis can be filled with many different kinds of blades. Calling an enclosure a blade would be like calling a tire a car.
Source: I’ve spec'd out and ordered literally thousands of blade servers and hundreds of enclosures from every major vendor.
Edit: an even better simile would have been people who call a whole desktop computer a cpu.
I've had a bunch of APC SmartUPS 1500 in both tower and rack formats.
I found the batteries in the rack version, where batteries lay on their sides, reach end of life much sooner than the tower version, where batteries are upright.
It's likely poorer heat dissipation/ventilation in the rack version, but may have to do with battery orientation. The rack version also seems to suffer corrosion damage from battery off-gasing if batteries reach critical failure and overheat. I haven't seen the same damage on the tower units. Again some combination of orientation and ventilation.
The batteries are sealed lead acid AGMs, where orientation isn't a concern because the electrolyte is absorbed in fiberglass mat. Also, they shouldn't be off-gassing at all; that's the "sealed" part. I've had SLA batteries puff up, and if you are getting corrosion in and around the APC chassis you may have a leak.
Typically, what determines the life of these batteries is the number of cycles and the number of deep discharges. The more extreme the environment, the shorter the lifetime. I have network equipment sitting outdoors in a cabinet; the batteries typically last 24-36 months instead of the 48-60 months I see from indoor UPSes.
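For a rough sketch of why the environment matters so much, here's the common vendor rule of thumb that VRLA battery design life halves for every ~10 °C above the 25 °C rating (an assumption; exact derating curves vary by manufacturer):

```python
# Rule-of-thumb VRLA battery derating: design life halves for every
# ~10 C above the 25 C rating (common vendor guidance; curves vary).
def expected_life_months(design_life_months: float, avg_temp_c: float) -> float:
    return design_life_months / (2 ** ((avg_temp_c - 25) / 10))

print(expected_life_months(60, 25))  # indoors at 25 C: the full 60 months
print(expected_life_months(60, 35))  # hot outdoor cabinet: ~30 months
```

That lines up roughly with the 24-36 month outdoor vs. 48-60 month indoor lifetimes above.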
I recently had a Supermicro board die because it became infested with moths out in the garage, it was disgusting and the board was borked even after extensive cleaning.
How do you keep wildlife at bay? (insects and rodents are attracted to heat and shelter).
Maybe some kind of solid metal cabinet enclosure like the public utility boxes along some roadside thoroughfares?
In a garage environment, using mesh everywhere and ensuring there are zero openings like an empty PCIe slot is the best solution.
Outdoors you need special enclosures typically made for industrial computers; and many companies build special fanless enclosures that dump heat directly through the metal enclosure (it is basically the heat sink).
Others have fan designs that have their intake and exhaust through filters on the bottom.
The circuit board in a UPS and a motherboard aren't really the same. The former is designed for high voltage/high current and conformal coated in epoxy. The latter has much tighter tolerances, is optimized for low voltages/high frequencies, and is probably not conformal coated.
Orientation isn't supposed to be a concern with SLA batteries, but it's probably nicer for them to be upright than not... when the seals do fail, it's better to have the electrolyte seep out and stay on top than seep out and drip onto something else.
I could believe the consumer models are actually built to a higher standard. A rack mount unit is likely going to be climate controlled, minimal vibration, actively monitored, etc. The consumer units will be subjected to all manners of hell: pet hair, zero ventilation, installed and forgotten. Nobody wants the UPS to burn down the house.
In my experience, server grade stuff tends to be significantly more reliable (as long as it doesn't become infested with insects or something odd like that :p).
My consumer machines are generally flaky as hell in comparison... maybe partly due to Windows and gfx card drivers? YMMV.
>I saw that rack mount makes things cost significantly more.
There's a logical reason for this though. It's a pretty safe assumption that you're in a larger facility if you're rack mounting, and a lot of rack mount chassis come with features like dual PSUs and other redundancy. Redundancy does cost.
>put a rack in the basement or a utility room
They make enclosures[0] that are smaller than a full rack but allow for rack mounting equipment. You can then just shove this enclosure on a shelf or wherever. Much more convenient than a full-on server rack.
You basically can't get a "PC" into any of the cheap small wall mount solutions I've seen on Amazon or elsewhere - they are almost always around half or less the depth of a "real" server chassis. A 1U/2U rack mount PC needs a fair amount of depth to fit, especially one with a fullsize GPU etc. You can do something like a Mac Mini/Studio or Intel Nuc sitting on a rack shelf, or some racked SBCs like the Pi, but not so much custom PCs.
These cheap wall mount units including the one you linked are great for rack mount network or AV equipment, not so much to rack your PC build like Jeff did here.
The width for server racks is standardized - the depth, however, varies enormously from rack to rack, with wall units usually being pretty shallow. I'm not sure most people would even want something as deep as, say, a typical second-hand 1U/2U server hanging on the wall - they often need 29 inches or more.
Is this number including an extra ~2-4 inches for power cable clearance at the rear? In your new case and most others, power and cooling at the rear require a few inches of clearance in addition to the depth of the case itself, otherwise the power cable can't physically fit. I've made this mistake before: with a wall mount rack those extra few inches matter in a way they don't for a free-standing one, where cables can extend beyond the rear edge of the rack without meeting a wall.
EDIT: The box Jeff has used is one of the few I've seen that might actually work well in a wall rack, if you can live with an mATX form factor motherboard in your build; at ~9 inches deep, it leaves enough room for power and air at the rear. A quick fit check is sketched below.
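To make that fit check concrete, a minimal sketch (all numbers hypothetical; measure your own case and rack):

```python
# Will a case fit a shallow wall-mount rack once rear clearance is added?
case_depth_in = 9        # e.g. a short mATX box like the one above
rear_clearance_in = 3    # power cable + airflow allowance (~2-4 in typical)
rack_depth_in = 13       # usable depth of the wall-mount rack

fits = case_depth_in + rear_clearance_in <= rack_depth_in
print(fits)  # True: 12 in needed vs. 13 in available
```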
If you want a Linus Tech Tips style gaming rig or mid/fullsize tower build in a rack, I still think you might need to go free-standing most of the time.
In most cases I do agree the extra price is there for a reason. It was specifically the UPSes that seemed absurd.
And the reason I would put the equipment in another room would be to keep the heat away from the living space. A basement would be especially good for that since it's naturally a cooler area.
Still good to know about the smaller racks though. My bank account is much healthier than the last time I looked into rack mounting so I might actually get one that size for my apartment now. It would really neaten things up.
I bought one (probably not the exact one I linked) for mounting a UPS and a home security camera DVR unit (I don't buy into the security-as-a-service from all of those options), but it never got installed. So the DVR/UPS just sit stacked on a shelf unprotected, but that's what the missus wanted. Now it's that thing that just sits there taking up space, is hard to sell at a garage sale, and I can't be bothered to create an eBay account to sell it.
Also, even if you don't have rack mount equipment, they make the shelves you can attach to the rack mounts and set other gear on them. So yeah, you can really clean some stuff up with these.
> I quickly abandoned that idea when I saw that rack mount makes things cost significantly more.
Second hand (but perfectly usable) parts. Expect more second hand enterprise items to get to the market if the economy doesn't start to boom soon. Second hand enterprise gear is usually perfectly fine in terms of longevity. Mind the noise.
Get a 3D printer and print some rack adapters yourself. There's a surprising array of adapters for all sorts of devices, and they work just fine for stuff that's not too heavy (like most home networking gear).
If the rack itself is too expensive, you can do an Ikea Lack rack. Full size racks are easy to find (relatively; location dependent). I haven't had much luck with network racks (which was the largest my wife was comfortable with), so I bought the rack new. Still managed to fit the whole networking gear stack (none of which is racked atm, they are sitting on a rack shelf) and an ITX server (Node202, which I used as my workstation in the past, fit perfectly).
You don't really want fiber optic unless you are connecting two different buildings that do not share ground. Inside the same house, ethernet works just fine. Fiber is finicky, most home contractors don't know how to deal with it, and transceivers are expensive - usually more so than 10G copper.
In my personal computing experience, the main value of a rack is still the density, as it is in the datacenter. Moving one or two ATX PCs, a UPS, and a switch into a half height rack isn't that much of a win (compared to the floor and/or basic shelving). But doubling that into a full height rack is.
You don't need strictly rackmount hardware either. I just have a pair of non-rack SUA1500's sitting next to one another on the solid bottom built into the rack (not even a movable shelf). The battery type is much more common than the rackmount versions.
Case-wise I'd just recommend 4U all the way for full size cards, coolers, and fans. Smaller fans make more noise.
Also just spend the money on cases with hot swap disk bays for any machine you plan to have more than a few drives. You'll end up wanting them sooner or later, and it's nicer to run the SATA cabling once with how cramped rack cases can get in places.
A nice compromise is to buy a tower UPS - the ones which are deeper but not as high - and then place them on a cantilever shelf on the rack. Mine occupies 3U of rack space and I can even place my NAS on the shelf in front of the UPS.
On the other hand, surplus rack mount servers are very cheap on eBay. You can certainly stick non-rack UPSes into a rack. I did this for about a year (two APCs sitting at the bottom of the rack) before I got a great deal on some surplus rackmount UPSes.
Yeah; buying new is extremely expensive. Many cities have an e-waste recycler from whom you can get great deals (otherwise browse Craigslist and Marketplace; there are good deals, sometimes free, on older equipment).
For UPSes, a reputable brand (like APC) will still function fine but might need new batteries, which are a lot cheaper than a whole UPS.
That's a great point about a second-hand UPS. My mind has been tainted by all the non-removable batteries inside consumer products so I completely forgot that a worn-out battery on an old UPS is easy to replace.
I haven't shopped rackmount UPS before but for lower end consumer APC UPS the official batteries from Amazon or the APC website are often the same price or more expensive than a whole new APC UPS from a slickdeals-spotted sale.
I have a music studio here in my home, for my own personal use. It's fairly common for high-end studio hardware such as signal processors, and synthesisers to be made in rack-mountable format. At some point I figured I could save some desk space by building a studio PC in a 2U rack mounted case. This meant I could keep it directly together with the rack-mounted power conditioner, audio, and MIDI interfaces that it connects to. I figured this was hitting a studio ergonomics home run, as the heart of my studio now fits in 5U of rack space. As the years have gone on, I've found it less and less convenient to build my studio around a bulky 19" rack. These days I'm looking at ways to shrink this setup further. Less and less mid-range hardware is rack mounted these days too. I'm not about to get rid of my setup anytime soon, but I won't be building another one like it when it comes time to upgrade.
In the home studio world this kind of setup is much more common. I think a lot of people are attracted to the aesthetic of rack-mounted studio gear. I know I definitely am. I personally don't find this setup as ergonomic as desktop modules though. I think manufacturers are coming to the same conclusion.
I used to have a rackmounted pc, and audio interface, personal server, etc all on one rack. Over time, things shrunk, and it made more sense to put it all in a well ventilated spot in the garage with a 10gbps link to my office.
I could have used wifi, the bandwidth would have been fine, but where's the fun in that ;)
Going another step, I'd love to have a Thunderbolt to fiber adapter without having everything baked into a single cable. These products exist, but cost $5000 for each end.
Yikes. Probably such a niche product that they slap a pretty beefy FPGA and some off the shelf NICs together which can't be good for the BOM cost haha.
On a similar note I recently worked with an adapter that carries USB 2.0 over a cat5 cable. It doesn't even do that good of a job -- jitter is pretty high, which makes it useless for our use case of controlling a laser cutter live from a host computer -- but guess how much it costs? $736! https://www.blackbox.com/en-ca/store/product/detail/usb-20-e...
I have a coworker whose entire family PCs are rack mounted in the garage. He has HDMI and USB extensions through the house, so the only thing on desks is a monitor and powered USB hub. Perfectly silent. It seems very comfy.
But that became kind of limiting for my preferences. For a while it wasn't really practical unless you could accept a pretty low bandwidth stream.
But now fiber-optic display cables are readily available. My first purchase was a 100ft/30m 32Gbps DisplayPort cable, for $55! To be honest, I couldn't believe it worked; I assumed it was a scam.
It's indeed excessively long... too long! I can make it out my room & up to the roof & compute from there with less than half the cable. It's way harder to unroll & reroll than any cable I've ever had before, but also way longer. I have since gone back and bought shorter cables.
Thankfully active extension cables are pretty good & tend to just work. Newer models have an inline 1-port USB hub every 25ft/7.5m (used to be every 5m), and then a wall-wart at the end to power the hubs (to be honest that feels like it should be unnecessary; a dc-dc booster mid-span & bus power should be fine, albeit leaving nearly no power available at the end).
Also beware, there's a 7-tier limit from root hub to device, so 5 hubs maximum * 7.5m, if you only plug in a single thing (a wireless keyboard/mouse dongle). The arithmetic is sketched below.
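To make the tier budget explicit, assuming the USB 2.0 topology rule (7 tiers total, with the root hub at tier 1 and the device in the last tier, leaving at most 5 hubs in between):

```python
# USB 2.0 tier budget: 7 tiers total; the root hub is tier 1 and the
# device occupies the last tier, so at most 5 hubs can sit in between.
TOTAL_TIERS = 7
MAX_INLINE_HUBS = TOTAL_TIERS - 2   # root hub + device take two tiers

SEGMENT_M = 7.5                     # one inline hub per 7.5 m of cable
max_run_m = MAX_INLINE_HUBS * SEGMENT_M
print(max_run_m)                    # 37.5 m before the single device
```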
If you’re only looking to do low-bandwidth (keyboard, mouse, audio, maybe a webcam or game controller) at the display, you can get inexpensive USB-over-CAT5 adapters. The one I have terminates into a 4-port powered hub, and latency is low enough I can’t detect it in normal use.
Latency would be super interesting to test for both. I tend to think 4-5 USB hubs are still under 1ms of latency, and I'd expect the DisplayPort digital->optical->digital path takes less than 1ms too. But I'm guessing, for sure.
> I have a coworker whose entire family PCs are rack mounted in the garage.
SunRays (https://en.wikipedia.org/wiki/Sun_Ray for those too young) were really nice. When they came out I realized why have a noisy workstation under my desk when I can have a massively larger (far noisier) server in the rack and a totally silent SunRay on my desk! That was my setup for a long time.
No longer using the SunRay (sigh) but conceptually not too far from it. I have a macmini on my desk but I do very little compute on it, it's there basically just to drive the monitor. All the machines are on a rack in the garage.
I spent a lot of time playing in bands and doing the home recording thing 10-15 years ago. Had a bunch of rack mount gear and dreamt of building out a mobile recording rig using a DJ case[1] like my dad had (for DJing weddings).
Eventually found someone selling an old 4U (which is about as tall as a standard desktop is wide) ATX server case, locking front panel and all, on craigslist for cheap.
It was super heavy (as was all of the other audio equipment), but having all of my stuff in one case (and only needing to set up a display, keyboard, and mouse) was pretty great. Never had much issue with heat/noise, but wasn't overly concerned at the time either.
One interesting thing about going the rack route is that most of the hardware optimizes for considerations a data center/large corporation might have.
That can be nice in a lot of ways (hardware is fairly robust, layouts are usually friendly for quick access/maintenance, overall density of machines is pretty good).
But it means almost no one is paying attention to how much noise these things make, because for most of the target audience that's WAY down the list of things they care about.
I have a couple of - not elderly but not new - 1U and 2U machines, and they are LOUD AS FUCK. I used to run my server farm in my office when it was just spare personal computers (under the table I use for hobby projects), but I quickly ended up moving it down to the basement once I was running the rack machines.
They're just too loud for comfort on a semi-regular basis.
Yep! I really got into a lot of trouble with my family for my HP G8 boxes. It was like the sonic equivalent of that episode of Seinfeld, where the Kenny Rogers Roaster sign was outside of Kramer’s apartment.
There are fan mods and firmware updates, but in the end it’s probably the best use of one’s time to buy G9+ as they are far quieter. /r/homelab has some great data about these issues.
Also watch out for network gear. A 48-port PoE Brocade switch was as loud as, if not louder than, the servers.
Pro-tip: if you can spare more rack units, get a bigger server case. 1U is usually far louder than 2+
There are a few manufacturers that make silent racks[1]. They're hard to come by, and are much more expensive than regular racks, but there's a small market for them.
Otherwise, manual soundproofing is an option, but then you have heat to deal with.
Most servers aren't very loud unless you're stressing the hardware, and fan settings can usually be tweaked (see the sketch at the end of this comment), or fans replaced with silent alternatives.
Still, you wouldn't want this kind of equipment humming next to you in the same room, so it's best to dedicate a small server closet, room, or garage, if you have the space.
That said, a custom server build in a rack mounted enclosure is not the same as using enterprise servers. My gaming PC is in a 4U enclosure and is as quiet as any tower PC.
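On the fan-tweaking point: a minimal sketch of how folks often pin server fan speeds via the BMC. This assumes a Dell iDRAC-style controller that accepts the widely shared raw IPMI codes; other vendors use different commands entirely, so verify against your own hardware before running anything:

```python
import subprocess

# Switch the BMC to manual fan control, then pin the fan duty cycle.
# Raw codes below are the commonly documented Dell iDRAC values
# (an assumption - check your own BMC's documentation before use).
def set_fans_manual(percent: int) -> None:
    subprocess.run(["ipmitool", "raw", "0x30", "0x30", "0x01", "0x00"],
                   check=True)  # disable automatic fan control
    subprocess.run(["ipmitool", "raw", "0x30", "0x30", "0x02", "0xff",
                    f"0x{percent:02x}"], check=True)  # all fans to N%

set_fans_manual(20)  # pin all fans at a quiet 20% duty cycle
```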
I keep thinking that Oxide Computer is missing a trick, and that someone should be building minicomputers plumbed to plug coolant lines straight into the rack: either glycol lines or mini-split lines.
The hot row/cold row stuff makes sense when you've got a data center, but if computers are eating the world then more and more people will need 80u of rackspace total, maybe at 3 branch offices.
Yes! I went down the rackmount rathole about 20 years ago, and for 10 years all my computing equipment was installed in a 44u cabinet.
The problem is that rack mount plus low noise equals very high cost, if it is even achievable. Eventually I could not stand the noise anymore, and got rid of the rack. So did every single one of my colleagues who tried rackmount.
As others have mentioned, rackmount equipment is also expensive, often ridiculously so. I am very happy to leave racks in the data center where they belong.
When craigslist was brand spanking new I found a ruggedized 1/4 height rack that looked like it was surplus military.
My second stroke of luck was that the hall closet in my apartment had a corner that would exactly fit this rack. The telephone wiring junction box was immediately above the rack, and all of the house wiring was straight Cat5 runs from there, so a few new outlet covers and some cable rejigging later, we had internet everywhere (but only 2 pairs where the phone plugged in), terminated at the rack, which was partly muffled by winter coats. You can't push a lot of BTUs this way, but you can do a few.
My solution before this was to hide a tower PC behind the couch. For the right kind of couch and the right vent holes, you can eat an awful lot of ambient noise by using major furniture. There's lots of space out there for things like this.
I also tried at one point to build an air filter that would fit under a bed (less noise, no floor space, fewer dust bunnies).
Get a 3U or 4U case and replace all of the fans that come with it with something designed for desktops. A full size tower is basically a 4U case on its side, so there is identical room for fans &c.
I have a 3U setup where the loudest thing in it is the GPU fan on the 1050Ti.
> One important thing that server racks don't address is noise.
They relocate the noise to where you aren't, so they do address it.
As I mentioned in another comment, my desktop used to be a big Sun server (forgetting model right now) loud as a jet engine. But it was in the rack far from anyone and my desktop was a totally silent SunRay.
I put mine in the basement. Handles most of the noise unless things get really exciting.
I have a friend who keeps his server rack in his unheated detached garage. Works great unless it gets really cold and all the servers shut down due to low input air temperature. Which I never really knew was a thing.
One of my regrets is that my house doesn't have a basement with a furnace in it. For years I've had a design in my head to build a data closet next to the furnace, so the cold air return can suck air straight off of the servers in winter, and the closet sucks in air from a duct along the foundation.
And now that you can get heat pump water heaters, that strategy would work a bit better still.
This is what turned me off as well. I ended up getting an older Dell dual-socket workstation (2011-2013 era) for cheap. Still heaps of power and room for upgrades, but whisper quiet when idle and under load.
That's a nice looking case! Seems like it would be especially good for a router or networking box. In the course of my build I found out about PlinkUSA and Sliger, who both have a few decent 2U options for those of us who don't want screaming server fans.
fiber optic DP cables are maybe $70 or $80 and work pretty well in my experience.
on the pricier side if you have something with TB support, you can do display and USB over that to a fiber optic TB cable (about $400); this is my current setup for my racked PC in another room to my office, with a TB hub (caldigit TS4) that splits out display and USB.
Any idea if those are Thunderbolt 4? It's hard to find any vendor that will state support for 4K144hz through a dock over any sort of extension; right now I am limited to "only" 4K60.
I run a fiber optic DisplayPort and can do 4K 120 from 50 feet away. Thunderbolt cables are incredibly expensive, so I just ran the DisplayPort, an Ethernet, and a USB cable. Not sure I've seen any TB4 optical cables in long lengths yet, just the Apple one that's like 6 feet for $200. You can buy TB3 ones for $400 or so.
I have a 50ft Monoprice SlimRun DisplayPort cable on my 4k 144hz gaming monitor and it works great. However, it's just a DisplayPort cable, not thunderbolt, direct from PC to monitor, no dock. And it cost $170. :/
The only available fiber optic Thunderbolt cables I'm aware of right now are from Corning, and they are Thunderbolt 3. I'm running 5120x1440@120hz through it (and a variety of USB); 4k144hz would be about 35% more bandwidth. Corning lists the cable as supporting 2x4k or 1x5k (presumably both at 60hz); 5k is the same bandwidth as mine, and 2x4k is somewhat more, but less than 4k144hz. Total bandwidth on the cable is 40G and the bandwidth needed for 4k144hz is about 32G, I believe. Can't guarantee it'd work, but 4k120hz probably does?
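Back-of-envelope math for those numbers, assuming 8 bits per colour channel and roughly 20% blanking/protocol overhead (real DisplayPort timings and encoding overhead vary):

```python
# Rough uncompressed video bandwidth estimate, in Gbps.
def gbps(width: int, height: int, hz: int,
         bits_per_pixel: int = 24, overhead: float = 1.2) -> float:
    return width * height * hz * bits_per_pixel * overhead / 1e9

print(gbps(5120, 1440, 120))  # ~25.5 - the 5120x1440@120 setup above
print(gbps(3840, 2160, 144))  # ~34.4 - 4k144, uncomfortably close to 40G
print(gbps(3840, 2160, 120))  # ~28.7 - why 4k120 probably fits
```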
Dell T7810/5810/5820s can be had second hand for practically nothing and they have fairly modern build specs. They're sized to fit in racks with some basic L shaped rails. They also make great under desk systems as they're whisper quiet.
I moved my desktop into a 4U case* about six years ago and never looked back. Then I wanted to build my own NAS and--oh look at that I already have one rackmount chassis... I guess I'll get another and some posts? I did and it was also a great call.
I went through this in the early 2000s when I was a) exposed to rack stuff in CCNA class in high school/votech and b) dot-com surplus started showing up really cheap on eBay.
Nowadays my desktops are either SFF or mid-tower CAD workstations. We've got some control computers racked with test equipment/tools but they're not really general purpose workstations.
I moved my workstation into a 4U rack mount case. Can highly recommend it. Built a custom rack mount cabinet that looks like a traditional arcade game cabinet. Moved the workstation, along with the rack mount UPS and several other bits of computer hardware, into the arcade cabinet. Fabricated some shallow drawers with over-extension, soft-close, self-locking slide-outs, and installed a separate computer for running the emulators. Lots of noise reduction padding, lots of airflow with large diameter fans. Then I made another custom rack mount case, side-by-side, again with lots of noise reduction padding, for the Synology NASes, Mac Mini, laser printer, cable modem, network switch, and various bits of networking hardware.
My gaming PC lives in a rack in my basement now in a 4u case. There is a 150ft optical hdmi cable running to the third floor, and a usb over cat6 cable returning control signals. I like how it is entirely silent to me, and does not warm my room
I have a near-identical setup, and it’s absolutely spectacular. The worst part was running conduit, since the layout of my home is such that I had to run conduit up from my office, run wire across the attic, then more conduit down the opposite exterior wall to my basement.
When I built my most recent home GPU server, it seemed like 3U wasn't tall enough to fit a big GPU upright, given the PCIe power connectors on top of the card, so I went 4U.
Though I think you could get a double-wide GPU into a 2U with a PCIe riser cable/card.
For 3U/4U I like iStarUSA cases. I went for 4U for the quieter 120mm fans and so it can fit half-height bays sideways on the front panel. Here's my video encoding workstation in an iStarUSA D-400L-7SE: https://imgur.com/a/YLjDX0e
- Full-height PCI cards fit (though some GPUs have taller than full-height coolers; if the HS/F protrudes above the mounting bracket, you'll have to measure).
- You can use larger fans, which can help a lot with airflow and/or noise. Just throw away the fans that come with the case as anything designed for a rack assumes you'll be wearing hearing protection while working around it.
- More motherboard options. If you use a power-supply designed for rack-mount, standard ATX motherboards fit fine. If you want to use a desktop power supply there are plenty of micro-ATX options.
I'm confused why you'd choose ITX and a chassis that requires half height cards. There are plenty of 2U chassis that accept ITX and allow full height cards rotated 90 degrees using a riser.
Standard tower cases can easily cost that (or more), perhaps partly driven by it increasingly only being gamers buying them (so you can charge more for RGB, funky angles, or also minimalism, etc.), but server stuff is often more expensive than a desktop counterpart would be, different market but also more specialised I suppose.
I paid almost £260 iirc for a 4U case with 16x3.5" hot swap bays, a few years ago, and that seemed like a great deal (without waiting for something used on eBay anyway). Now, I still haven't really made use of it... but that's on me.
Recently I found myself with a mini ITX motherboard and got a Ghost S1 case for it. The space saved and convenience of such a small factor is life changing. I’m never going back to ATX and big towers.
I have a Meshlicious. I can fit a full ATX PSU no problem (could have more SSDs or 2.5" drives otherwise). Because it's a "sandwich", the GPU and CPU draw air from opposite sides of the case. It's all mesh so airflow is great. Pretty small too; not as small as your Ghost, but it can accommodate really large GPUs if need be. Even watercooling would be viable (I've seen some great custom loops; for the CPU even a two fan AIO would fit).
I do miss ATX because ITX boards generally do not have any expansion slots. Sure, they come with WiFi, but if I decide to, say, add a 2.5G (or 10G) network card, video capture, or what have you, I'm out of luck.
I do not miss the case sizes. ATX cases are spacious, but usually TOO spacious. Because the mainboard is large, most cases don't seem to optimize for space at all, so you fill the thing to the brim with all the expansions you can think of and there's still a lot of wasted volume. Were I to go back to ATX, it would be with a non-orthodox case (open case, inside a desk, etc).
Interestingly I've gone the opposite way for a couple years - desk side cases (full towers) instead of racked equipment, part of that is I prefer 2 post racks to four.
There are so many more convenient alternatives to the traditional massive tower for consumer PCs today, yet they are all severely limited by the same issue: modern GPU form factors. I really wish the GPU makers would get a clue and come up with a more flexible standard. I suspect just making the heat sinks modular/swappable would mostly solve the problem.
Does anyone know of short-depth ATX chassis (preferably 2U)? Short as in <13" (330mm)? I’m jealous of the ITX case here, at only 55mm over the ITX width; that should mean a <300mm ATX case should be possible, but I’ve never found anything close. Most everything is >14", usually 15"+, with wasted space to fit a bunch of HDDs or something.
Make one! Protocase is a great resource. Alternatively reach out to Sliger they may have something that works. They are in the process of moving (or were over the holidays), but they make great cases.
So I'm the guy who rack-mounted a bunch of game machines for https://kentonshouse.com -- and am currently nearing completion of a new house with even more machines. Here's some of the challenges I've personally encountered in this.
*Fitting the GPU*
A normal GPU, mounted in the normal way (perpendicular to the motherboard), will not fit in a 2U case.
It looks like Jeff's solution was to buy a half-height GPU. Neat! I didn't actually realize those exist. But it looks like NVidia's regular consumer-oriented GPUs don't come in this form factor; Jeff instead went with an RTX A2000, which is part of NVidia's "professional" GPU line. The problem with this is that it's way more expensive to achieve the same gaming performance.
In my previous house I solved the problem by using 3U chassis. This provides just enough space to mount the GPU perpendicular. However, it still had problems. Modern GPUs require supplementary power direct from the PSU. The socket for this usually points outward, away from the motherboard. In a 3U chassis, this socket is right up against the top plate so there is no room to plug in the power. Sometimes you can find a GPU that has the power socket pointing in a different direction, but they have become rare. I actually had to physically mod ten GTX 1060's by hand, cutting away the plastic socket housing and directly attaching each power wire to the corresponding pin. Big PITA but it worked.
In the new house I'm going with 2U, but using a riser card. So, the GPU is mounted parallel to the motherboard. This seems to work OK. The big problem is ensuring the chassis and the motherboard both have their expansion slots aligned right for this. Most 2U chassis are designed for perpendicular, half-height expansion cards. A smaller fraction support full-height parallel cards on a riser. The motherboard also has to have a PCIe slot in the correct place for the riser. This is usually the "top" slot, closest to the CPU, and I've noticed a lot of motherboards these days are opting to omit that slot to make more space around the CPU.
(I'm not too excited about it though, and would be happy for other recommendations.)
At least one motherboard that worked when I tried it was the ASUS PRIME B660-PLUS, but I haven't made a final motherboard selection yet.
A further problem, which might be specific to this chassis, is that the total horizontal space between the riser card and the power supply is tight. Some GPUs are excessively "tall" and won't fit. (The same problem probably applies to 3U chassis with perpendicular mounting.) I was just barely able to fit the EVGA GeForce RTX 3070 Ti 08G-P5-3785-K into the space.
*RAM orientation*
Servers orient their RAM modules parallel to the direction of airflow, that is, perpendicular to the back of the chassis. For reasons I don't understand, consumer/gaming motherboards almost never do this; they orient the RAM parallel to the back. Maybe this is because desktop chassis are not usually optimized for efficient airflow anyway, so it doesn't matter? And there's more space on the board if the RAM can be mounted this way? I dunno.
Anyway, in my experiments so far this has just not caused a problem. Yeah, the RAM has non-ideal orientation, but there haven't been any issues with overheating. ::shrug::
*PSU*
Servers use a different form factor of PSU. This is infuriating because all the server PSUs available retail seem to be terrible compared to desktop PSUs! They tend not to support higher-end wattage (needed for power-hungry GPUs) and they are never "modular" (you get a bundle of cords sticking out of the PSU and there's no way to remove the ones you don't need except with scissors).
There are some 2U chassis that actually allow you to mount a desktop PSU. However, in practice they are not actually compatible with any modern desktop PSU, because all such PSUs these days are designed with their intake fan on the top or bottom, not on the front/back. In a 2U chassis, that intake will be totally blocked by the top/bottom of the chassis.
I am going to have to live with crappy server PSUs I guess.
*Retail server market is awkward*
More generally, I get the general feeling that the server-oriented components which are available via normal retail channels like NewEgg are... not what big server farms actually use. Everything seems kinda crappy? Maybe I just feel this way because they don't candy-coat everything the way the consumer market does. But I have the funny feeling that people building out datacenters generally order mass quantities of chassis, PSUs, and even motherboards tailored specifically to their needs, and as a result the retail market for server components may be surprisingly small and poorly served. But that's just my suspicion based on my experiences here.
There's just not much serious market for a 'build your own' server platform. Almost anyone serious buys systems prebuilt from Dell/HP/Lenovo, which have nice integrated solutions solving all of these problems, but the elements are too tightly coupled to work well in a piecemeal environment. For example, power might be distributed entirely using a backplane that the PSUs, fan trays, and motherboard all plug into. Since the market for DIY is so small, there's no real push for standardization, which would support a modular environment like standard PCs, and there's some disincentive too, since the vendors want flexibility to do cooling etc. in the best way. So what you end up with is a combination of low-volume bespoke solutions and 'ugly hacks' to make the high-volume commodity parts work in a way they weren't really designed to.
The closest that exists out there is probably SuperMicro. They have a decent variety of chassis, backplanes, motherboards and so on that work well together in a system, and the stuff is pretty decent quality. But because they are still designed for 'serious' workloads, they will not hit the cost of an equivalent desktop PC.
SuperMicro do sell modular power supplies and you can fit their workstation power supplies to some of their server chassis. Depending on the SuperMicro chassis, the power supply may not even have any cables at all, it simply slots in, and is hot swappable, and it is the chassis that has the cabling, and again, some of those are modular. You can also make a modular power supply out of a regular supply, though it is a bit of work obviously. One of my SuperMicro (CSE-846) cases has 1280w PSUs and the other SuperMicro has a 2kW PSU, both are 80+ Platinum.
Not all server chassis obviously, but many have fans mounted at the front and pull air through the case, over the motherboard, through some plastic ducting, and out the back. Some server designs have only a heat sink on the CPU, and rely on this ducted air flow from the case fans to keep everything cool. RAM header orientation will often be due to CPU, VRM and memory controller orientation, but mostly it is about real-estate on the motherboard and physics.
Retail market for server chassis is pretty abysmal. You might consider a Cooler Master, or a SuperMicro or an ASUS case. Depends on your budget and your willingness to use a Dremel.
If you are considering a 2U rack mount case with a full-sized GPU, you might want to look at cases that specifically mount the GPU sideways; there are quite a few out there that will happily take two GPUs, though the cases are often full-depth. Just be aware that server GPUs and consumer grade GPUs have different cooling options, and consumer grade cards may not fare well if the case is too close to the exhaust fan.
Instead of a riser, and fighting for a PCIe slot, you might also consider a PCIe extension cable, and then fabricate a custom bracket to hold the GPU in the desired orientation.
I should've been more specific — I do that on my gaming/secondary PC. My main Mac Studio setup is run through a 27" 4K LG display (which is also at 60 Hz, though).
For someone that reads as many logs as you/I do - I strongly suggest looking into high refresh rates!
They seem very gaming oriented, because well, they are - but being able to read things as they fly by is a huge help. I'm always the person in calls at work going: "you missed it!"
I worked at an ISP for a while where we set up a projector display and just had the log firehose on the wall at all times. I never really paid direct attention to it, and it had terrible resolution and a lowish refresh rate. Even still, the human brain is an amazing pattern matching machine and the feed gave us a lot of useful insight. Regularly someone on the team, even non-technical folks, would walk by and say "oh looks like $customer is having problems" just because their brain had divined a pattern from exposure and correlation. Sometimes we could turn those moments into automated alarms, sometimes we couldn't figure out how our brains made the determination.
To this day I still try to keep a log feed visible along with the grafanas or whatever - it is shockingly useful even if I don't ever read those logs on the display.
It's been so many years since 16:10 monitors were abandoned by manufacturers and I'm still sad about it.
Though ultrawide does lift my spirits a bit. Doesn't improve the feel of the desktop for productivity, but gaming is excellent (first person games become more immersive and in WoW my screen is so much less cluttered with less important elements pushed way out to the side).
I've always called those '1200p' monitors, to differentiate them from other similar screens. I have a pair of Dell U2415's (and U2715H's) on my desk and they're fantastic screens.
Is there science to support the idea that arranging server pizza boxes vertically, like books on a bookshelf, is passively cooler? The horizontal orientation makes sense for servicing, which would still be possible with a hinge mechanism like the hidden tables in the armrest of a fancy designer chair.
I'm slowly migrating some homelab equipment to a StarTech 24U but am stuck at cable management. Are there any resources on how you manage cables, particularly at the sides/rear of the rack? Are those cable management arms still a go-to?
Zip ties, velcro, channel guides, punch down harnesses, gangways, and so forth. Just be sure to add strain relief by utilizing anchor points in the case so you don't have 20lbs of cabling hanging off a few RJ-45 sockets.
I use zipties when I have a more permanent install (like this one), but will use velcro ties (you can get a huge pack on Amazon for under ten bucks) when I know I'll be swapping things out here and there.
Tangential - does anyone know of a good smallish rack that has some sound deadening but also can mount say, a c4140 that is 36in long? Would be nice to rack all my computer gear in one place, but if so, I would want to put everything in it.
No idea about sound deadening capabilities, but the live sound industry also uses the 19" rack standard, so there are a bunch to choose from [1] if you want something non-traditional from a computing perspective. The roto-molded Gators were my personal favorite when I was trafficking a lot in rack systems... even more tangentially, I really wish I'd held onto my ADA rig (MP-1, Quadraverb, Microtube 200) as they're now going for stupid amounts of money.
I remember TechnoTim (on YouTube/Twitch) installed a fully enclosed rack that had temperature-controlled fans at the top for extra ventilation. It seemed like he liked how it muted the noise a bit, but I'm not sure how much.
Tripp Lite has an 18U (SRQ18U) that can do 37" deep; I've had one for a few years and it works well. You'll still hear 40mm screamers running at full speed, but it cuts down on noise a lot.
The title is very misleading. He downgraded to lower-powered parts to manage assembling a system without modifications within the space constraints. All well and good, especially given the hardware was sponsored, but it should not be titled as such, IMO.
"Moving" to me implies that the 3080 was somehow made to work inside that case. Regardless, the conversation in this thread has some interesting takes on this concept.