
This is a fantastic result, but I am dying to know how the G.hn chipset creates the bit-loading map on a topology with that many bridge taps. In VDSL2 deployment, any unused extension socket in the house acts as an open-circuited stub, creating signal reflections that notch out specific frequencies (albeit usually killing performance).

If the author is hitting 940 Mbps on a daisy-chain, either the echo cancellation or the frequency diversity on these chips must be lightyears ahead of standard DSLAMs. Does the web interface expose the SNR-per-tone graph? I suspect you would see massive dips where the wiring splits to the other rooms, but the OFDM is just aggressively modulating around them.
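To picture the stub problem: an open-circuited tap notches the channel wherever the stub is an odd quarter-wavelength. A rough sketch of where those dips would land (the 5 m stub length and 0.6 velocity factor are my guesses, not the author's actual wiring):

```python
# An open bridge tap of length L reflects the signal with a delay, causing
# destructive interference at f = (2k+1) * v / (4 * L), i.e. wherever the
# stub is an odd number of quarter wavelengths.

C = 299_792_458          # speed of light, m/s
VELOCITY_FACTOR = 0.6    # assumed for typical telephone wiring

def notch_frequencies_hz(stub_length_m, f_max_hz=200e6, vf=VELOCITY_FACTOR):
    """Return the notch frequencies an open stub puts below f_max_hz."""
    v = C * vf
    notches = []
    k = 0
    while True:
        f = (2 * k + 1) * v / (4 * stub_length_m)
        if f > f_max_hz:
            return notches
        notches.append(f)
        k += 1

# A 5 m dead extension run notches roughly every 18 MHz across G.hn's band:
print([round(f / 1e6, 1) for f in notch_frequencies_hz(5.0)])
```

With enough spectrum, the OFDM just drops those tones, which is presumably what the SNR-per-tone graph would show.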



A view from the debugging tools, since you asked: https://thehftguy.com/wp-content/uploads/2026/01/screenshot_...

I don't think there is anything too fancy compared to a DSLAM. It's just that DSLAMs are low-frequency, long-range by design.

Numbers for nerds, off the top of my head:

* ADSL1 is 1 MHz, 8 Mbps (2 kilometers)

* ADSL2 is 2 MHz, 20 Mbps (1 kilometer)

* VDSL1 is 15 MHz, 150 Mbps (less than 1 kilometer)

* Gigabit Ethernet is 100 MHz over four pairs (100 meters). It either works or it doesn't.

* The G.hn device here runs up to 200 MHz. It automatically detects what can be done on the medium.


Gigabit Ethernet uses four pairs in each direction: the same four pairs carry both directions at the same time.


1000Base-T uses two pairs per direction, actually. It's full duplex. Each port sees two TX and two RX pair.

There are four pair of wires in the cable. If you use all of them for TX, you can't receive.


> There are four pair of wires in the cable. If you use all of them for TX, you can't receive.

No, you absolutely can use them all for transmit and receive at the same time. The device at each end knows what signal it is transmitting, and can remove that from the received signal to identify what has been transmitted by the other end.

This is the magic that made 1000Base-T win out among the candidates for GigE over copper, since it required the lowest signaling frequencies and thus would run better over existing cables.
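A toy illustration of that subtraction. The single scalar echo gain is a simplification (real PHYs learn an adaptive filter per pair), and the PAM-5-ish levels are just for flavor:

```python
import random

# Toy model of 1000BASE-T-style echo cancellation on a single pair:
# both ends drive the wire at once, and each end subtracts its own
# (known) transmit signal from what it measures to recover the far end.

random.seed(0)
levels = [-2, -1, 0, 1, 2]                    # PAM-5-like symbol levels
local_tx = [random.choice(levels) for _ in range(1000)]
remote_tx = [random.choice(levels) for _ in range(1000)]

ECHO_GAIN = 0.8    # how strongly our own transmitter leaks into our receiver
LINE_GAIN = 0.5    # attenuation of the far-end signal

# What our receiver actually sees: our own echo plus the attenuated far signal.
rx = [ECHO_GAIN * l + LINE_GAIN * r for l, r in zip(local_tx, remote_tx)]

# Cancel: we know exactly what we transmitted, so subtract the echo estimate,
# undo the line gain, and slice back to the nearest symbol level.
recovered = [round((r - ECHO_GAIN * l) / LINE_GAIN) for r, l in zip(rx, local_tx)]

print(recovered == remote_tx)  # every far-end symbol recovered
```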


1000Base-T uses four pairs in both directions at the same time. It does this through the use of a hybrid in the PHY that subtracts what is being transmitted from what is received on the wires. 802.3ab is a fairly complicated specification with many layers of abstraction. I spent a few months studying it for a project about a decade ago.



Relevant section:

  Autonegotiation is a requirement for 1000BASE-T implementations as minimally the clock source for the link has to be negotiated, as one endpoint must be master and the other endpoint must be slave.

  1000BASE-T uses four cable pairs for simultaneous transmission in both directions through the use of echo cancellation with adaptive equalization. Line coding is five-level pulse-amplitude modulation (PAM-5).

  Since autonegotiation takes place on only two pairs, if two 1000BASE-T interfaces are connected through a cable with only two pairs, the interfaces will complete negotiation and choose gigabit as the best common operating mode, but the link will never come up because all four pairs are required for data communications.

  Each 1000BASE-T network segment is recommended to be a maximum length of 100 meters and must use Category 5 cable or better.

  Automatic MDI/MDI-X configuration is specified as an optional feature in the standard that is commonly implemented. This feature makes it safe to incorrectly mix straight-through and crossover cables, plus mix MDI and MDI-X devices.
(Slight edits)


> 1000Base-T uses two pairs per direction, actually. It's full duplex. Each port sees two TX and two RX pair.

you may be thinking of 1000Base-TX (TIA-854) which uses 2 pairs in each direction, similar to 100Base-TX (IEEE 802.3u). whereas 1000Base-T (IEEE 802.3ab) uses all 4 pairs in both directions.

basically, the -TX are dual simplex with a set of wires for each direction and -T are full-duplex with the same wires used in both directions at the same time.


It's been many years since I implemented G.Hn hardware, but if memory serves the chipsets are typically able to split the available bandwidth into 1 or 2 MHz wide bins and choose different symbol densities and FEC levels for each bin. If you have a bin that has horrible reflections, you don't use it at all.
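Roughly how that per-bin choice could work, using the textbook SNR-gap approximation. The gap value and the bit cap here are my assumptions for illustration, not actual G.hn tables:

```python
import math

# Per-bin bit-loading sketch: each bin gets a constellation size picked
# from its measured SNR. The "SNR gap" converts Shannon capacity into a
# practical bits/symbol figure for a target error rate.

GAP_DB = 9.8          # classic gap for ~1e-7 BER, uncoded QAM (textbook value)
MAX_BITS = 12         # assumed cap on bits per symbol per bin

def bits_for_bin(snr_db):
    """Bits/symbol supportable in one bin; 0 means skip the bin entirely."""
    snr = 10 ** ((snr_db - GAP_DB) / 10)
    return min(MAX_BITS, max(0, math.floor(math.log2(1 + snr))))

# A channel with a deep reflection notch in the middle bins:
snr_profile_db = [36, 34, 33, 5, 2, 31, 30, 28]
loading = [bits_for_bin(s) for s in snr_profile_db]
print(loading)  # the notched bins get no bits at all
```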

I also recall that the chipsets don't do the toning automatically, so it's up to the management device to decide when to re-probe the channel and reconfigure the bins.


I know nothing about DSL. But G.hn uses OFDM, and OFDM can do a cute trick in which it learns a complex number to multiply each subcarrier signal by. Since the subcarrier signals are literally just Fourier coefficients of the signal, this can equalize all kinds of linear time invariant signal issues so long as they’re reasonably compact in the time domain as compared to the guard interval. And I imagine that G.hn has some way to figure out which coefficients (subcarriers) are weak and avoid using or relying on them — there are multiple ways to do that.
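The trick in miniature: a naive 8-point DFT standing in for a real FFT, and a cyclic prefix is what makes the short channel act as a circular convolution, so one complex multiply per subcarrier undoes it exactly:

```python
import cmath

N = 8

def dft(x):
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

# One complex data symbol per subcarrier (QPSK-ish).
data = [1+1j, 1-1j, -1+1j, -1-1j, 1+1j, -1-1j, 1-1j, -1+1j]

# Channel impulse response shorter than the guard interval; with a cyclic
# prefix in place, the channel acts as a circular convolution.
h = [0.8, 0.3, -0.1]
tx = idft(data)
rx = [sum(h[m] * tx[(n - m) % N] for m in range(len(h))) for n in range(N)]

# Receiver: DFT, then one learned complex multiply per subcarrier (1/H[k]).
# In practice H[k] comes from pilot symbols; here we just compute it.
H = dft(h + [0.0] * (N - len(h)))
equalized = [Rk / Hk for Rk, Hk in zip(dft(rx), H)]

print(all(abs(e - d) < 1e-9 for e, d in zip(equalized, data)))  # True
```

Subcarriers where H[k] is nearly zero are exactly the "weak coefficients" you'd avoid loading bits onto.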


You're right, G.hn will have the same issue as DSL here; all of the tiny bridge taps from the extra jacks will create small dips in the bitloading.

That being said, with 200 MHz of spectrum to play with, the impact on rates should be negligible. With the 200 MHz G.hn phone-line profile (48 kHz tone spacing), we get ~1.5 Gbps, so you can take some lumps and still get ~1 Gbps throughput.
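The shape of that arithmetic, with the average bits/tone and overhead figures picked by me to land on the quoted ballpark rather than taken from the spec:

```python
# Back-of-envelope for the numbers above. The tone count follows from the
# quoted 200 MHz profile and 48 kHz spacing; bits/tone and overhead are
# assumptions to show the shape of the calculation.

BAND_HZ = 200e6
TONE_SPACING_HZ = 48e3
AVG_BITS_PER_TONE = 10      # assumed usable average across the band
OVERHEAD = 0.25             # assumed FEC + cyclic prefix + framing loss

tones = BAND_HZ / TONE_SPACING_HZ                  # ~4167 subcarriers
raw = tones * AVG_BITS_PER_TONE * TONE_SPACING_HZ  # == BAND_HZ * bits/tone
net = raw * (1 - OVERHEAD)

print(f"{tones:.0f} tones, net ~{net / 1e9:.2f} Gbps")
```

Notching out even a few hundred tones only shaves off proportionally, which is why the lumps are survivable.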

One big advantage, though: G.hn is natively point-to-multipoint, so each jack could have its own G.hn endpoint.



