博威 (boway) --- Cloud architecture: winning at cloud computing

Thread starter: network

Q3 revenue was $35.1 billion, up 17% quarter over quarter; Q4 revenue is expected to be about $37.5 billion

OP | Posted on 2024-2-23 16:03:59
Portfolio Update: I've been making adjustments (including today) to my portfolio, and this is what it looks like, plus the promised newest entry!
#EV, Power, AI, Computing, Robotics
$TSLA 40%
$NVDA 5%*
#Data Storage/Warehouse/Analytics
$MDB 15%
$SNOW 10%
#Security, #Observability
$CRWD 15%
$ZS 10%
$DDOG 5%
Sold my tiny $NET & $HCP positions (I intend to buy back $HCP again).

*I plan to make $NVDA a 10% position but will wait for their Q1 earnings on 5/24/23.

I intend to keep $TSLA at 40% for a while. If it goes up, I'll trim it and add to $NVDA to bump it to a 10% position. During the 2007-2008 GFC, I bought a lot of $NVDA and sold it for a nice profit during the bitcoin mining frenzy. Here's why I started the position again!

I feel $NVDA will be up another 50% in a year.

An Inflection Point in Generative AI:

During the past 10 years or so, development in AI was mostly about perception, and that helped fields like automated driving take off in a huge way. However, what we've witnessed in recent months with generative AI and AI chatbots like ChatGPT is a step-function change. This change is unprecedented and will change the trajectory of almost all businesses. AI is no longer just about perception but also about recommendation and content generation. From the public cloud providers to all of tech, everyone has entered a race to leverage these LLM-based AI models.

All of this means that there'll be huge demand for $NVDA's accelerated computing tools & GPUs!

Their accelerated computing stack is a suite of tools, libraries, and technologies for developing applications with breakthrough levels of performance.

Combined with the performance of its GPUs, these tools help developers immediately start accelerating applications on NVIDIA's embedded, PC, workstation, server, and cloud datacenter platforms.
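
To make that concrete, here is a minimal sketch (my own illustration, not NVIDIA's sample code and not from the original post) of what "accelerating an application" looks like in plain CUDA C++: a trivial vector addition offloaded to the GPU. The kernel name, problem size, and launch configuration are illustrative assumptions.

```cpp
// Minimal CUDA C++ sketch: offload a vector addition to the GPU.
// Build with: nvcc vec_add.cu -o vec_add   (file name is an assumption)
#include <cstdio>
#include <cuda_runtime.h>

__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                  // 1M elements (illustrative)
    const size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);           // unified memory, visible to CPU and GPU
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    const int threads = 256;
    const int blocks  = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);    // run on the GPU
    cudaDeviceSynchronize();                    // wait for the kernel to finish

    printf("c[0] = %.1f\n", c[0]);              // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

The same source builds with nvcc on an embedded Jetson board, a workstation, or a cloud GPU instance, which is the portability point being made above.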

Nvidia is now all-pervasive, from the CSPs to on-prem data centers, right up to the edge. Even businesses that have rich troves of data (that they might not want to share with others) can do all their training and inferencing on it.

Elon Musk is said to have recently purchased roughly 10,000 Nvidia GPUs for a generative AI project within Twitter.

There's a lot of collaboration going on between $GOOG and $NVDA: Google Cloud G2 VMs with NVIDIA L4 GPUs are a cloud-industry first!

https://cloud.google.com/blog/products/compute/introducing-g2-vms-with-nvidia-l4-gpus
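
As a small, hedged illustration (my own, not from the post or the Google blog): once you have a G2 VM, or any CUDA-capable machine, with the driver and CUDA toolkit installed, a few lines of CUDA runtime code will confirm which GPU is attached; on a g2 machine type this should report an NVIDIA L4.

```cpp
// Minimal sketch: list the GPUs visible to the CUDA runtime.
// On a Google Cloud G2 instance this is expected to show an NVIDIA L4.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        printf("GPU %d: %s, %zu MiB, compute capability %d.%d\n",
               d, prop.name, prop.totalGlobalMem >> 20, prop.major, prop.minor);
    }
    return 0;
}
```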

Risks: Microsoft is reportedly accelerating work on its own in-house AI chips, but I don't think this will be significant.

Will have to see how Tesla Dojo shapes up in the future. Dojo does have huge potential.

I think we've just scratched the surface when it comes to Generative AI and Accelerated Computing and the future looks very bright for $NVDA.



6:31 AM · Apr 29, 2023 · 434K Views






OP | Posted on 2024-10-6 08:28:43
"Winners dont play their best every single point, but they seem to always play their best when the points matter." - Jensen H

This is why I bet big on Jensen playing his best in AI. $NVDA

OP | Posted on 2024-10-6 08:33:20
The AI Investor
@The_AI_Investor
"Overall, the team estimates a $500B per year infrastructure investment profile to support both new AI workloads and acceleration of existing CPU-centric traditional workloads."

- JPM on $NVDA

OP | Posted on 2024-10-6 08:34:30
$NVDA JPM Positive: 'remains on track to ship its next-generation Blackwell GPU platform in high volume production in 4Q' (Overweight, PT $155)

NVIDIA remains on track to ship its next-generation Blackwell GPU platform in high volume production in 4Q... yields improving as expected and still expecting several billion dollars in Blackwell revenues in 4Q. The team characterized the early product yield issues as "behind the team" following a mask fix on the B200 GPU chip, and the team is on track to ship several billions of dollars of Blackwell GPU platforms in its fiscal 4Q (Jan-qtr) with a continued strong ramp into CY25.

On the recent sell-side noise about rackscale portfolio changes (i.e. the GB200 dual-rack 36x2 NVL72), the NVIDIA team cautioned investors against paying too much attention to the noise. Given certain recent Asia sell-side reports on a discontinuation of NVIDIA's GB200 dual-rack 36x2 NVL72 rackscale solution, the team again cautioned investors against listening to the noise on portfolio changes and pointed them to the Blackwell GPU platform supporting over 100 different system configurations (including the NVL72 and NVL36 solutions) versus only 19-20 for the prior-generation Hopper GPU platform. Our view is that the GB200 36x2 NVL72 platform is initially targeted to be a high-volume platform given the lower power density per rack (versus the full stand-alone NVL72 rackscale solution) and the lower infrastructure support costs required (less complex liquid cooling is needed for the 36x2 NVL72).

Sustainability of AI/accelerated compute spending beyond CY25: the team remains confident on continued growth in customer spending over the next several years, driven by a combination of AI, accelerated computing, and enterprise penetration. The team remains confident on the sustainability of XPU infrastructure spending over the next several years as GenAI/foundational models continue to scale exponentially (more training compute capability required), as inferencing continues to penetrate the markets, as enterprise/sovereign AI initiatives are still in the early phases of development, and given the strong push towards accelerating existing workloads (data processing, database, analytics, etc.) in the existing traditional CPU-centric datacenter infrastructure. Overall, the team estimates a $500B per year infrastructure investment profile to support both new AI workloads and acceleration of existing CPU-centric traditional workloads.

Competition from start-ups and China domestic AI solutions: incumbency is key in these markets, and NVIDIA dominates with its software/ecosystem/developer installed base and strong partnerships across the value chain (CSPs, OEMs, ODMs). Recently there have been several start-ups bringing their AI compute system solutions to market, targeting enterprise segments (both training and inference), and furthermore the China government has recently been encouraging domestic AI companies to use domestic AI silicon suppliers. The NVIDIA team made a key point that the two markets where the NVIDIA platform/software/ecosystem has strong incumbency are the enterprise and China markets, with over 5M NVIDIA-based developers worldwide and support for NVIDIA platforms across all global CSPs, OEMs, and ODMs. Combined with its Enterprise AI software/framework platform (allowing customers to train and deploy their proprietary models to market faster), all of this provides strong barriers to entry and makes it even more challenging for competitors given NVIDIA's aggressive cadence of next-generation compute silicon and platform solutions.
