Posted on 2023-12-8 06:20:52
We all know GPUs are a bottleneck in AI right now. Everyone is scrambling to get their hands on those precious NVIDIA H100s. But where are those chips actually going? Below is my best attempt to map this out.
Some interesting takeaways:
-- Around half of the chips are going to the hyperscalers (Microsoft, Amazon, Google, Oracle)
-- Meta was a surprise to me – what are they doing with their 150k GPUs?
-- Another interesting segment is the “other cloud providers” such as CoreWeave and Lambda. Will they continue to take share away from the hyperscalers?
-- From my research, Inflection AI is the only startup that’s trying to build out their own GPU cluster (please correct me if I’m wrong).
Note that the information here is based on publicly available information and third-party sources. It’s meant to be directional, not 100% accurate. However, there are a few data points that give me confidence we’re in the right ballpark:
-- Based on this chart, the total volume of H100s is around 620k. At ~$30k/GPU, this would equate to ~$18B of bookings. Note that bookings don’t necessarily translate to recognized revenue. However, compared with Nvidia’s last quarter’s data center revenue ($14.5B), this “feels” about right (a quick sanity-check calculation follows this list).
-- According to the Financial Times, TSMC is on track to deliver 550k H100s to Nvidia this year, which roughly corroborates the analysis below.
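For anyone who wants to reproduce the sanity check, here is a minimal back-of-the-envelope sketch in Python. It only uses the rough figures quoted above (~620k units, ~$30k per GPU, $14.5B data center revenue); all of these are estimates from public sources, not confirmed numbers.

# Back-of-the-envelope sanity check using the rough figures quoted above.
# All inputs are estimates from public sources, not confirmed figures.
total_h100_units = 620_000      # estimated total H100 volume from the chart
price_per_gpu = 30_000          # assumed ~$30k average selling price per H100
nvidia_dc_revenue = 14.5e9      # Nvidia data center revenue, last reported quarter

implied_bookings = total_h100_units * price_per_gpu
print(f"Implied bookings: ${implied_bookings / 1e9:.1f}B")                                    # ~$18.6B
print(f"Multiple of last quarter's DC revenue: {implied_bookings / nvidia_dc_revenue:.1f}x")  # ~1.3x

The ~1.3x multiple of a single quarter's data center revenue is consistent with bookings that get recognized over several quarters, which is why the ~$18B figure passes the smell test.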
Please like or share if you found this research helpful! Slides available upon request.