Discussion about this post

michael:

My read on the inference report is that AMD inference chips surprisingly do not suck. They're competitive until you go up against GB, and most likely far behind if tested against GB300. Imo, the biggest factor for inference is power consumption per token. Going to 2 nm is important for that.
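
A rough sketch of why power per token matters for inference cost, with entirely hypothetical numbers (the joules-per-token and $/kWh figures below are illustrative assumptions, not taken from the report):

```python
# Back-of-the-envelope: electricity cost per million output tokens.
# All figures are hypothetical placeholders for illustration only.

def energy_cost_per_million_tokens(joules_per_token: float,
                                   usd_per_kwh: float = 0.08) -> float:
    """Convert an assumed energy-per-token figure into $ per 1M tokens."""
    kwh_per_token = joules_per_token / 3.6e6  # 1 kWh = 3.6e6 joules
    return kwh_per_token * usd_per_kwh * 1_000_000

# Hypothetical: a chip drawing 1.0 J/token vs. one drawing 0.6 J/token,
# roughly the kind of gain a node shrink plus architecture tweaks might buy.
for label, jpt in [("current node", 1.0), ("2 nm-class", 0.6)]:
    print(f"{label}: ${energy_cost_per_million_tokens(jpt):.3f} per 1M tokens")
```

The absolute electricity numbers are small, but the same joules-per-token ratio also caps how many tokens a power-limited datacenter can serve, which is where the node advantage compounds.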

Neural Foundry:

This is one of the sharpest takes on the AMD-OpenAI warrant deal I've seen. Your framing of warrants as 'sneaky discounts' that distort gross margins is brilliant - it makes the whole structure much clearer. The logic that OpenAI, with 30-300 FTEs working on inference economics, knows exactly what discount is needed to make AMD viable is devastating and elegant. I also appreciate your intellectual honesty here - you have massive financial skin in the NVDA game and still acknowledge the AMD warrant deal is genius. The observation that AMD will hit $600/share, since that's where the last tranche vests, is a clever catch. My only question is whether the warrant dilution over time will be material enough to offset the revenue growth from actually winning these hyperscale deployments.
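
A minimal sketch of the 'sneaky discount' mechanic the comment refers to, using made-up numbers that are not terms of the actual deal:

```python
# Illustrative only: how a warrant grant can act like a hidden per-unit discount.
# Every number below is a hypothetical placeholder, not a term of the AMD-OpenAI deal.

def effective_discount(list_price_per_gpu: float,
                       gpus_committed: float,
                       warrant_value_to_buyer: float) -> tuple[float, float]:
    """Return (effective price per GPU, effective discount fraction)
    if the buyer treats the warrant value as an offset against its purchases."""
    offset_per_gpu = warrant_value_to_buyer / gpus_committed
    effective_price = list_price_per_gpu - offset_per_gpu
    return effective_price, offset_per_gpu / list_price_per_gpu

# Hypothetical: $30k list price, 1M GPUs committed, warrants worth $10B to the buyer.
price, disc = effective_discount(30_000, 1_000_000, 10_000_000_000)
print(f"effective price: ${price:,.0f}/GPU, effective discount: {disc:.0%}")
```

Whether that offset ultimately shows up as reduced revenue or as share dilution is an accounting question beyond this sketch; the point is only that the buyer's effective price can sit well below the list price that headline margins are quoted against.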

