Uptime Lab's CM4 Blade Adds NVMe, TPM 2.0 to Raspberry Pi (jeffgeerling.com)
152 points by geerlingguy on Aug 4, 2021 | 52 comments


To be clear, the blade adds TPM 2.0 via an Infineon chip, and you can interact with the chip, but features like secure boot require bootloader-level integration, and at this time, since no other Pi hardware has TPM, the Pi's closed-source bootloader doesn't support it.

Also something interesting to noodle on: technically the Pi 4/CM4/Pi 400 have ECC RAM [1] — but some ECC functionality may require CPU-level integration... which I'm guessing isn't present on the Pi's SoC (but who knows? I posted this forum topic asking for clarification: [2]).

[1] Product brief mentions LPDDR4 with on-die ECC: https://datasheets.raspberrypi.org/rpi4/raspberry-pi-4-produ...

[2] https://www.raspberrypi.org/forums/viewtopic.php?t=315415


On-die ECC means that the DRAM array in the chip stores ECC and the chip itself handles that transparently. The interface does not use ECC in this case. One of the reasons this is done is higher cell densities (just like in storage) and allowing longer refresh intervals (refreshing requires power regardless of usage).
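To make the "chip handles it transparently" part concrete, here's a toy sketch of the same single-error-correcting principle using Hamming(7,4). Real on-die ECC uses much wider codewords (e.g. ~8 check bits per 128 data bits), so this is just an illustration of the mechanism, not the actual DRAM code:

```python
# Toy single-error-correcting code (Hamming(7,4)): 4 data bits are
# stored as 7 bits, and any one flipped bit can be corrected on read.
# On-die ECC DRAM does this internally, invisible to the interface.

def hamming74_encode(nibble: int) -> int:
    """Encode 4 data bits into a 7-bit codeword with 3 parity bits."""
    d = [(nibble >> i) & 1 for i in range(4)]  # d0..d3
    p1 = d[0] ^ d[1] ^ d[3]  # covers codeword positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]  # covers positions 2,3,6,7
    p4 = d[1] ^ d[2] ^ d[3]  # covers positions 4,5,6,7
    # Codeword positions 1..7: p1 p2 d0 p4 d1 d2 d3
    bits = [p1, p2, d[0], p4, d[1], d[2], d[3]]
    return sum(b << i for i, b in enumerate(bits))

def hamming74_decode(codeword: int) -> int:
    """Correct up to one flipped bit, then return the 4 data bits."""
    bits = [(codeword >> i) & 1 for i in range(7)]
    # Recompute the parity checks; the syndrome is the 1-based
    # position of the flipped bit (0 means no error detected).
    s0 = bits[0] ^ bits[2] ^ bits[4] ^ bits[6]
    s1 = bits[1] ^ bits[2] ^ bits[5] ^ bits[6]
    s2 = bits[3] ^ bits[4] ^ bits[5] ^ bits[6]
    syndrome = s0 | (s1 << 1) | (s2 << 2)
    if syndrome:
        bits[syndrome - 1] ^= 1  # flip the bad bit back
    return bits[2] | (bits[4] << 1) | (bits[5] << 2) | (bits[6] << 3)
```

The key point for the thread: the corrected data comes out of decode with no sign anything happened, which is exactly why on-die ECC gives the host no error reporting.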


The inclusion of TPM is definitely the most interesting aspect of this to me. I was initially turned off by the idea because I tend to associate them with nasty enterprise-y DRM types of stuff.

But I wonder why it's included here. Maybe for secure storage of private keys? I'm guessing that's the desired purpose for TPM in a server setting.

Obviously I'm not super familiar with TPM. I wonder if it can be used for things like better random number generation or hardware-accelerated hashing. I'm guessing if it supported hashing from userspace, the cryptocurrency crowd would have been all over it already.


> I wonder if it can be used for things like better random number generation...

Too slow to be useful, and the BCM283x SoC already has an internal RNG:

https://github.com/torvalds/linux/blob/master/drivers/char/h...

> ... or hardware-accelerated hashing

The TPM is on a really slow bus (33 MHz, 4 bits, tons of wait states). Even without any hardware acceleration, hashing on the CPU is much faster.
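Putting rough numbers on that (the 300 MB/s software SHA-256 figure is my own ballpark assumption for a Cortex-A72-class core, not from the thread):

```python
# Back-of-envelope: why hashing through the TPM bus can't compete.
# Figures from the comment above: 33 MHz clock, 4-bit-wide bus
# (ignoring wait states, which only make it worse).
clock_hz = 33_000_000
bus_width_bits = 4

bus_mb_per_s = clock_hz * bus_width_bits / 8 / 1e6
print(f"TPM bus ceiling: ~{bus_mb_per_s:.1f} MB/s")  # ~16.5 MB/s

# Assumed software-hashing rate on the Pi's CPU (rough estimate):
cpu_sha256_mb_per_s = 300
print(f"CPU hashing is roughly {cpu_sha256_mb_per_s / bus_mb_per_s:.0f}x the bus ceiling")
```

So even before any per-command overhead, the bus tops out more than an order of magnitude below plain CPU hashing.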


You wouldn't hit a TPM every time you wanted a random number. Your OS would just use it when it wanted to seed its entropy pool.

This is particularly useful at boot when entropy is very low.
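A minimal sketch of that pattern: one slow read from the hardware source seeds a fast deterministic generator, and every later request is just hashing. (This toy DRBG stands in for the kernel's real pool, which is ChaCha20-based; `os.urandom` stands in for a read from something like `/dev/hwrng`.)

```python
import hashlib
import os

class ToyDRBG:
    """Toy hash-based generator seeded once from a slow hardware RNG."""

    def __init__(self, hw_seed: bytes):
        # One slow read from the hardware source at boot...
        self.state = hashlib.sha256(hw_seed).digest()
        self.counter = 0

    def read(self, n: int) -> bytes:
        # ...then all later requests are fast hash operations.
        out = b""
        while len(out) < n:
            self.counter += 1
            out += hashlib.sha256(
                self.state + self.counter.to_bytes(8, "big")
            ).digest()
        # Ratchet the state forward so past output can't be recovered.
        self.state = hashlib.sha256(self.state + b"ratchet").digest()
        return out[:n]

# os.urandom stands in for the one-time hardware seed here.
rng = ToyDRBG(os.urandom(32))
print(rng.read(16).hex())
```

The TPM's speed barely matters in this scheme, since it's only consulted for the seed.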


TPMs are just SPI connected devices. You can add them to any SBC.

Hardware accelerated hashing would be severely limited by the SPI bus speed.


I believe the 2.0 standard does support RNG, but I'm not sure of the specifics. There might be some hashing functionality, but the interface is going to be too slow for large data. I think it's there to do integrity checks for bootloaders and such.


Without bootloader integration, what's the difference between adding a TPM vs an HSM like [0]? Does TPM just have a more standardized interface?

[0] https://www.zymbit.com/2020/11/10/blog-security-module-raspb...


You can actually add both—there's a partial GPIO header for the Zymkey 4i on the board.

But yeah, I think the idea is TPM is a bit more standardized across hardware, so some software that uses it would not need any tweaks to run on the Pi with a TPM built-in.


>But yeah, I think the idea is TPM is a bit more standardized across hardware

But there are USB HSMs, along with smart cards (which are also HSMs). Aren't those pretty standard?


Sort of; the communication protocol is standard (CCID), but the actual HSM interface varies. Yubikeys implement the OpenPGP smartcard interface, for example, as well as PKCS #11.

The TPM specification has its own crypto interface that is standard across all hardware, so you can do things like generate a key and perform crypto operations without requiring the hardware implement a particular interface beyond whatever TPM version you require.

There are advantages and disadvantages to both approaches. On Linux, TPMs are implemented in the kernel, and CCID is handled by userspace drivers.


The Zymbit is trying to make up for the lack of secureboot/tpm integration on the Pi by securing the enclosure and monitoring that perimeter.


At this level, I don’t understand the PoE requirement. If you’re already making “blades”, why not also make a proper backplane that includes power and network connections as part of the backplane. You’d probably be more power efficient and remove the need for so many network ports.

This would also increase the costs for the chassis (because you’d have to add a network switch), but you could probably also pack in more blades… But even if you kept the ethernet PHY ports, you’d probably be more power efficient with even just power on a backplane.


A backplane is nontrivial to design electrically/mechanically and expensive to manufacture (even just some high quality connectors will quickly run up your BOM). If you also want to carry networking there you'll end up having to do some semi-custom network switch design, which locks you into a particular switch/ASIC vendor 'forever', or at least vastly increases friction when wanting to upgrade. Yes, they already have their own PCBA and some mechanical design, but it's very simple compared to what it takes to design a reliable backplane with an integrated power supply and network switch.

At the end of the day this a low-cost system for low-cost devices. IMO the little benefit from having a backplane is not worth the R&D cost and the downsides of fully backplaned blade systems.

And this is not just about this project: from what I see, the industry seems to have rejected fully integrated blade systems. Dell's M1000e is dying, and I don't think I've seen HPE bladecenters in years. Instead, semi-integrated systems like Supermicro's high-density offering is king. No proprietary chassis management system, no proprietary network switches, no locking yourself into whatever the backplane can carry.


> And this is not just about this project: from what I see, the industry seems to have rejected fully integrated blade systems. Dell's M1000e is dying, and I don't think I've seen HPE bladecenters in years.

No offense, but this is just wrong. HPE is all in with their Synergy platform (the replacement for their blade line, which has been out for several years now). Cisco just released its next-gen UCS blade platform as well. Dell has never been a dominant player in blades.

https://www.marketwatch.com/press-release/at-a-47-cagr-data-...


Not offended. This is a good reminder that this industry is far and wide enough that different people will see different subsets of it, and just because I observe something doesn't make it universal :).


At work we have oodles of 2U4N (8P) systems. Just going by eye I'd guess these have the same or even higher density than a blade center, while allowing more granular scaling, and being a standard form factor, and you get both front and back access to each node, so you can have whatever I/O you want, and you don't need a lift to handle the chassis.


> the industry seems to have rejected fully integrated blade systems

In my probably too simplistic mental model, the blade systems approach got overrun by the advent of multicore processors combined with virtualization and containers. While not 100% equivalent, it’s close enough and the price difference ends up making the decision in most scenarios.


totally agree, thanks


I think it's mostly convention and convenience—in the Pi ecosystem, most people using these things headless or as small servers are used to powering with PoE already.

Networking gear is often powered by PoE, but most servers have higher power requirements so a custom backplane or separate power method would be required.

Boards like the Turing Pi v2 (so far just prototypes and a marketing page on a website) would have a built-in network switch and power backplane (direct to each CM4), and I'm guessing another board or two like it will appear someday. I hope.


No, I get that... and there is a different level between hobbyist projects and larger production projects. Using the existing PoE "infrastructure" makes a lot of sense in the prototyping stages.

But if you're already going through the process to build custom boards for a cluster, then a backplane makes much more sense. We're not talking about a small 4-5 RPi cluster where everything can work off of a single PoE gigabit switch with a rat's nest of cat6. This project is talking about 16 of these blades in a 1U chassis. If you're going to go that far, a backplane isn't a big leap and would save power, space, and make cabling much easier.


The trick is, if your backplane is providing ethernet switching, then you need to provide for different levels of network needs.

Some people would be fine with a dumb N + 1 port gigE switch, which would be inexpensive, others will need 10G uplink, some 2x 10G with LACP so there's no bottleneck, some are going to want vlans or other managed switch style offerings, etc.

If the PoE power conversion really eats 6W per board as mentioned in the article, though, that seems like a lot of power, and an alternate power arrangement would be a no-brainer.


Given the amount of these that will probably be made (I imagine it will be a low number), I could certainly see having at least two backplanes:

1) network passthru cables with power
2) 1G network switch with 1G or 10G uplink

I just don't see there being much of a market for these where you'd have to have many more options. But, I think the power option is really a no-brainer... if you have to put passthru RJ45 jacks on the back of it, then that would be fine. At least they'd be in the back and could be pre-wired.

I'm tempted to get a CM4 and try to put together something just to see if it would work. I really don't have a problem with much else on the blades as-is (aside from the TPM, I'm not sure what that gets you). I do like the idea of putting an SSD on the blade for storage.


Yeah, I could see a passthrough thing, where you had an edge connector (or something), that was the 8-pins for networking and two for +5V and the backplane was just a +5 rail, and the rj45 jack right next to each slot. As a zero experience with PCB design kind of person, this seems relatively simple. You could make a 'one blade backplane' with a PoE kit for the single/small number of devices case where PoE makes more sense.


Is there somewhere after the POE that power could be tapped in? Even just a pair of pins for 5v sounds like it might make for significant power savings.


I disagree that a true "blade" approach would be better, and believe PoE is a better alternative to a PSU and backplane. My hope is that Uptime Labs uses the same approach as their current half-rack chassis: zero electronics, just holding the PoE-powered pseudo-blades.

Jeff Geerling made an extensive video series on the comparable $189 Turing Pi v1. In his final analysis, he breaks down how the networking part of the Turing Pi on its own is $100 worth of chips. With the Turing Pi v1 holding 7 CM3 modules and the current Uptime Labs half-rack chassis holding 8 CM4 pseudo-blades, you might expect a full-size chassis backplane supporting 16 blades to require $200 worth of networking chips. Mind you, that $200 is for an unmanaged network with no VLANs or anything nice like that. Who knows how much more a managed one would cost? $250? $300? More?

I've got two Turing Pi v1 boards. One is a dud because the network switch chip didn't work. It's a common problem if you read through the Turing Pi tech support Discord channel. These things are hard to get right. Leave it to a vertically-integrated network gear company to come up with something reliable.

I imagine two target consumer groups for these:

Group A will choose this over some Ampere Altra server, so the cost of a fancy new enterprise PoE switch is likely well within their budget.

Group B are Raspberry Pi hobbyists / "homelab" enthusiasts, and this is a group that's probably not averse to purchasing either a low-cost PoE switch on Amazon or a used enterprise PoE switch on eBay. There are many used gigabit PoE switches for sale at approximately the same price as a "managed" backplane (backplane parts only, ignoring R&D and QA overhead). Difference is, Group B gets both power and network for that price.

For both customer groups, the PoE switch is also reusable if they decide not to keep their Uptime Labs chassis, whereas a backplane wouldn't be. If only a fraction of the chassis capacity were used, then with a backplane, you've wasted money on idle backplane space. With the PoE approach, you have switch ports that can be connected to other things.


No one is going to choose this over an Altra server. If you need a server, you get a server, you don't go for a hacked together RPi solution.

For the other group -- I'm in that group. I have half a dozen RPis hooked together with various bits of PoE and duct tape. Most of them are doing something, but aren't very mission critical (unless you count streaming music around my house as mission critical!).

As much as I like the RPi for various projects, I don't think I could rationalize using something like this for work.

I'd like to have the networking built in, but realistically, I suspect it would be an easier sell to have a passthru backplane where you still don't have the physical RJ45 ports on the blade, but on the backplane. But you don't provide any switching on the backplane... it would be a 1:1 blade:port setup on the chassis. You could always add a switching backplane if you needed one. For me, the backplane idea is primarily about power. The PoE numbers suggested in the post don't scream efficiency.

Really, the chassis is still going to need its own power supply anyway (for fans), so why not just expand that to supply the RPis from one power supply?


Yup, this is only really practical for a rack you manage yourself. The colocation provider is going to charge you like $5 per network connection.


I like this idea. Can we do something similar to how Frame.work laptops work and just provide the connection to/from the chassis as USB-C?

It'd be great if the next version of the compute module would provide Thunderbolt 4 instead. I think we'd be able to provide both power and a 40Gb NIC over one connection.


Can we talk about the 10" rack, please?

It appears that the rack posts are just chopped/hacked from some other rack... but where are people sourcing these little 10"-wide rack shelves?

Also, the bottom component - is that a patch panel or a switch?



Bottom component looks like a patch panel.

Plenty of 10” kit about e.g. https://datacabinetsdirect.co.uk/soho-10-inch-data-network-r...

10” soho rack appears to be magic search phrase

I've seen a rack similar to the one in the article before and can't find it now, but one way to recreate it would be to buy something like this and use a 10” blanking plate instead of the 19” one https://www.amazon.com/Procraft-Desktop-System-Washers-DTR-1...

Lots of music gear shops sell desktop racks BTW


"The board gets fairly warm, likely due to the overhead of the PoE+ power conversion (which consumes 6-7W on its own!)"

7 W of losses? That's either wrong or insane for DC/DC regulation. Rough figures: Pi CM4 (5 W) + NVMe (2 W) = 7 W. So the regulator is only ~50% efficient?
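Working that through with the numbers quoted in the thread (the 6.5 W loss figure is just the midpoint of the article's 6-7 W range):

```python
# Rough efficiency check of the figures in the comment above.
load_w = 5 + 2           # CM4 (~5 W) + NVMe (~2 W)
conversion_loss_w = 6.5  # midpoint of the quoted 6-7 W overhead

efficiency = load_w / (load_w + conversion_loss_w)
print(f"Implied conversion efficiency: {efficiency:.0%}")  # ~52%
```

A decent isolated DC/DC stage should be well above that, which is why the number looks suspicious.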


I found the same to be the case on the official Pi 4 PoE+ HAT (the new 2021 version)[1].

[1] https://www.jeffgeerling.com/blog/2021/review-raspberry-pis-...


PoE can't do "purely" DC/DC regulation. It requires (in any reasonable setup) a transformer for isolation on both ends, which lowers efficiency, especially at the low end. Now with that said, you can definitely get decent efficiency with PoE; it's just that most people don't care, so it's not as big of a focus.


I know the Pi home lab hardware ecosystem is not the datacenter ecosystem, but what I really want on a Pi is not TPM but IPMI for remote management.


That’s funny. Because one of the more legitimate uses for a Pi in a data center is as a remote management KVM.

https://pikvm.org/


Ping @merocle on Twitter—the board's design is not 100% final, and there may be a few other features he could cram in (I've already thought of moving at least the serial UART connection to the front...).


Wait, since when is using several separate machines an acceptable way of computing? And especially counting total RAM as if all of it can be used when needed?

I've been doing that for a decade now (for cheapness mostly) and people said it was stupid and worse than a single computer with the same number of cores and RAM.

Also would those chips need heatsinks and cooling?


Using several separate machines is what you have to do when your computing needs don't fit in a single computer.

Of course, the total computing power here would fit in a single x86 computer, but it can be fun to play with clustering and this might be a lot less expensive than an x86 cluster.


> Wait, since when is using several separate machines an acceptable way of computing? And especially counting total RAM as if all of it can be used when needed?

It brings joy to Pi owners. There's more to the Pi than just the hardware.


Supercomputers have been built like that for a long time (decades) now, albeit with ultra-fast networking and storage to back them. There is a whole lot of research and money poured into specialized algorithms and software optimized for distributed machines. And that's far from the only form of distributed computing in use.


What I want to do for a funsies project is have a bunch of Pi4 CMs plugged into a PCIE switch and a multi-host 10gige adapter.

I don't think this is possible though; I think I read somewhere that the Pi 4's PCIe root complex doesn't support host<->host comms.

If anyone knows different, speak up!


Noob here... Would it be possible to do some sort of provisioned storage over PCI-E? So rather than each module having its own M2 drive, there is a PCIe connection on the backplane back to an SSD Raid.


For geek cred you could put in another network or storage adapter and do iSCSI or FC. SAS would allow 2 hosts to share storage; however, I don't think SAS supports more than 2 initiators. Directly connecting every host to a pool of flash chips would likely require a custom controller for the flash and a way to present it to each Raspberry Pi.

This will almost certainly not make sense economically, but it seems a lot of RPi clustering doesn't make sense from a performance/$ perspective anyway.


> Integrated PoE+ with Gigabit Ethernet

Are there any good PoE hats for Pi that won't melt everything around it, and have spare power for USB?


The only good a TPM can bring to this world is to entice people to finally hack an open-source bootloader for the Pi :)


Getting kind of sick of OP reviewing products that we can't buy and, imo, probably won't ever be able to buy. Rather, sick of it making it to the HN front page.

https://news.ycombinator.com/item?id=27460885


Most of these products are able to be built by hobbyists (in fact, almost all of them are produced by hobbyists). The reason most fail to launch to production is not only the extreme difficulty of scaling from prototype to manufactured production quantities of product, but also from 2020 onwards, the chip shortages.

For each product I've been able to actually touch, there are ten more I've heard about and wanted to see happen, but they just couldn't be made because certain specialty ICs are just not available to small-time hobbyists/makers.

I don't tend to review these boards because I want people to go out and buy them. I review them because I am inspired to try out new ideas and try my hand at new things (circuit designs, PCB design, etc.), and I figure maybe some other people can get inspired to try too.

I try to feature as much open source hardware as possible, because it actually is possible for people to get custom PCBs and put together their own versions of them.


You don't have to read his posts, the base URL is right next to the title. It is very easy to skip an article if it is not your cup of tea.

Personally I love the posts of Jeff as I think they are very well written and well researched.


Ugh, what?

I love seeing the Jeff Geerling posts and his blog is great!


Being off-the-shelf is an added bonus, but not a requirement. This website is called Hacker news, not Bestbuy news. As long as someone else can replicate the results from what is given in the post, I do think the post has enough merit.



