(Q|O)SFP are basically just raw high speed serial interfaces to whatever - you see this a lot in FPGAs, you can use the QSFP interfaces for anything high speed - PCIe, SATA, HDMI…
dcrazy 56 minutes ago [-]
> Although we can already buy commercial transceiver solutions that allow us to use PCIe devices like GPUs outside of a PC, these use an encapsulating protocol like Thunderbolt rather than straight PCIe.
> [snip]
> As explained in the intro, this doesn’t come without a host of compatibility issues, least of all PCIe device detection, side-channel clocking and for PCIe Gen 3 its equalization training feature that falls flat if you try to send it over an SFP link.
So, uh… what’s the benefit? How much overhead does Thunderbolt really introduce, given it solves these other issues?
jmyeet 22 minutes ago [-]
The benefits are twofold: physical colocation and bandwidth.
Thunderbolt 5 offers 80Gbps of bidirectional bandwidth. PCIe 5.0 x16 offers 1024Gbps of bidirectional bandwidth. This matters.
TB5 cables can only get so long whereas fiber can go much farther more easily. This means that in a data center type environment, you could virtualize your GPUs and attach them as necessary, putting them in a separate bank (probably on the same rack).
mikepurvis 16 minutes ago [-]
"same rack" should still be fine for 1m passive TB5 cable though, right?
consp 15 minutes ago [-]
> 1024Gbps
Good luck getting a 1Tbit transceiver. Anydirectional. Also it's 512Gbitish per direction.
jmyeet 11 minutes ago [-]
Bidirectional is a lot like biweekly. Biweekly, depending on context, means twice a week or once every two weeks, and bidirectional can mean either per direction or the total of both directions.
But yes I meant 512Gbps each way, to be clear.
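For reference, the figures being corrected here work out as follows (a quick sketch of the raw line-rate arithmetic only; real throughput is lower once protocol overhead is included):

```python
# Per-direction bandwidth comparison: PCIe 5.0 x16 vs Thunderbolt 5.
# Raw line rates only; the 128b/130b factor is PCIe's line encoding.

PCIE5_GT_PER_LANE = 32        # PCIe 5.0 runs at 32 GT/s per lane
PCIE5_ENCODING = 128 / 130    # 128b/130b line encoding overhead
LANES = 16                    # a full x16 slot

raw_per_dir = PCIE5_GT_PER_LANE * LANES        # 512 Gbps per direction
usable_per_dir = raw_per_dir * PCIE5_ENCODING  # ~504 Gbps after encoding

TB5_PER_DIR = 80              # Thunderbolt 5: 80 Gbps each way (symmetric mode)

print(f"PCIe 5.0 x16: {raw_per_dir} Gbps raw per direction "
      f"(~{usable_per_dir:.0f} Gbps after line encoding)")
print(f"vs Thunderbolt 5 at {TB5_PER_DIR} Gbps per direction: "
      f"roughly {raw_per_dir / TB5_PER_DIR:.1f}x the bandwidth")
```

So "1024Gbps bidirectional" is 512 Gbps each way, i.e. about 6.4x what TB5 offers per direction, before counting PCIe's own packet overhead.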
mmastrac 2 hours ago [-]
This was a super interesting video to watch. I honestly thought SFP required more setup, but this explains why AliExpress is so rife with USB3 and HDMI over SFP converters that are dirt cheap.
ahepp 25 minutes ago [-]
How does this compare to something like RDMA over Converged Ethernet (RoCE)?
Cool project! I think PCIe itself is likely to end up doing something similar soon; there are provisions in the spec now for optical retimers.
russdill 1 hour ago [-]
There's a number of optical modules for TB3 and TB4, might be an easier (but less fun) route as TB3 and TB4 can carry PCIe.
whalesalad 1 hour ago [-]
So you're saying I can put a handful of 4090's out in the middle of snowy Michigan with a handful of OM4 cables snaking into my basement to run legit arctic cooling with no noise?
myself248 27 minutes ago [-]
No part of Michigan is in the arctic, but sure, outside of mosquito season, that would work.
preisschild 46 minutes ago [-]
Might as well put your entire computer outside and use thunderbolt/usb-4 over fiber docks
phendrenad2 48 minutes ago [-]
A watercooling loop might be better, though the radiator fins will still rust from condensation.
benjojo12 46 minutes ago [-]
I mean yes, but you could also just place the entire computer out there as well
At a higher level, thunderbolt and https://en.wikipedia.org/wiki/ExpEther can of course both work over fiber too!
There is an interesting NSDI talk on the paper too - https://www.youtube.com/watch?v=kDJHA7TNtDk (2023)