re: Nebius - NBIS - AI Infrastructure Company
Posted on 10/10/25 at 7:12 pm to bayoubengals88
quote:
That was yesterday I think.
Correct. Bad date on my feed.
Anybody hear from IT? Need to send out a search party to the bar.
Posted on 10/10/25 at 7:58 pm to igoringa
quote:
I ended the day with 50 Nov and 30 Jan calls.
Of course, those of us with options are not exactly thinking super long term lol
But I’m also short 10 calls and have 20 SQQQ calls that are up 25 percent. I should have bought 100.
If we go below $118 I’ll start selling shares to buy January LEAPS.
If we reach $110 I may sell more shares for March $130s
Posted on 10/10/25 at 9:03 pm to bayoubengals88
At what point during the day did you pivot to SQQQ?
Posted on 10/10/25 at 9:06 pm to SquatchDawg
quote:
Anybody hear from IT? Need to send out a search party to the bar.
Here and accounted for. Had a long arse drive to Cape Canaveral all afternoon.
Sucks arse. frick Trump's actions. Cost me hundreds of thousands of dollars for his stupid arse ego.
Posted on 10/10/25 at 9:32 pm to IT_Dawg
IT - you going to be able to handle being on a boat all week next week? lol You keeping options on during that time? Gotta assume monitoring will be more challenging.
Posted on 10/10/25 at 10:25 pm to IT_Dawg
Good to hear from ya.
It’s never easy…but NBIS made it feel that way.
Dust off young man, this is just a speed bump. The only reason I got into NBIS in the first place was the March Tariff Tantrum. I’m going to be selling shite to take advantage of this next week.
Posted on 10/11/25 at 3:44 am to igoringa
quote:
IT - you going to be able to handle being on a boat all week next week? lol you keeping options on during them at time - gotta assume monitoring will be more challenging
Yup. Especially now. Can’t miss the ride back up when Trump gets his head out of his arse about adding another 100% tariff on top of the existing tariffs….just a fricking moronic thing to do. And I love DJT, but that’s just childish and stupid
Posted on 10/11/25 at 7:15 am to SquatchDawg
quote:
At what point during the day did you pivot to SQQQ?
1:04 CST
Posted on 10/11/25 at 7:42 am to IT_Dawg
quote:
Had a long arse drive to Cape Canaveral all afternoon.
You are one hour north of me. Looks like a good week of weather in the region. Have a good time.
Posted on 10/11/25 at 8:51 am to bayoubengals88
I feel good about my 11/21s and 1/16s.
Also, have some dry powder and can sell some of my gold stocks, which are up, if I need more. Next week (two weeks) is going to be very interesting. Trying to put together a shopping list now.
Posted on 10/11/25 at 10:31 am to SquatchDawg
quote:
Trying to pu together a shopping list now.
Ditto. I have some 11/21s and some 3/20s. May look in between those two dates. Feels like a few bumps coming: bounceback from Friday, lead-up to the earnings report, the earnings report itself, and a possible new hyperscaler deal.
Posted on 10/11/25 at 10:31 am to SquatchDawg
This thing dips below $110, I’m loading up with another $15,000 long buy to add to stockpile.
We have seen this show before. Tariffs threatened. Tech stocks sink. Announcement of the framework of an agreement. Tech stocks soar!
Stay long on this.
Posted on 10/11/25 at 2:34 pm to Covingtontiger77
Just a reminder of why we’re all here…
This is me using Grok to explain NBIS as a market disrupter, using two points from Roman Chernin, co-founder and Chief Business Officer:
### Why Nebius Stands Out for AI Workloads: A Deep Dive into Roman's Key Points
Based on Roman's emphasis on **specialization in AI-centric workloads** and **developer-friendly "Lego blocks" that prioritize time-to-value without sacrificing performance**, Nebius ($NBIS) emerges as a compelling alternative to general-purpose hyperscalers like AWS, Google Cloud, and specialized neoclouds like CoreWeave and Lambda Labs. While competitors offer GPU access, Nebius's AI-native architecture—built from hardware to software for large-scale training and inference—delivers tangible edges in efficiency, speed, and usability. Below, I'll break it down by Roman's two core reasons, contrasting with the others using real-world benchmarks, customer feedback, and market positioning.
#### 1. **Specialization: AI-Centric from the Ground Up for Superior Performance**
Roman's point underscores that Nebius isn't retrofitting general cloud infrastructure for AI—it's engineered exclusively for distributed training and inference at scale. This yields **best-in-class performance metrics**, like linear scaling on massive clusters and near bare-metal efficiency, which hyperscalers and even other neoclouds struggle to match due to their broader focus.
- **Vs. Hyperscalers (AWS, Google Cloud)**: These giants dominate ~70% of the cloud market but treat AI as an add-on to legacy workloads (e.g., web apps, storage). Their data centers mix GPUs with general compute, leading to suboptimal density and higher latency for AI tasks. For instance, Nebius achieves 3.2 Tbit/s per host via InfiniBand networking in AI-only facilities, while AWS/GCP's mixed-use setups dilute performance. In MLPerf Training 5.0 benchmarks, Nebius set records training a 405B-parameter model with 90%+ GPU utilization—4x faster than typical hyperscaler runs—thanks to custom servers, racks, and a Kubernetes-Slurm hybrid orchestration stack. Customers like SieveStack (building AI for drug discovery) report 4x faster model training on Nebius vs. AWS, with 30-50% reduced delays from test to production.
- **Vs. Neoclouds (CoreWeave, Lambda)**: CoreWeave excels in raw scale (100K+ GPU clusters) and rapid provisioning (35x faster spin-ups), but its Kubernetes-native setup feels "scattered" for pure AI workflows, per developer forums. Lambda focuses on on-prem/private clusters with early NVIDIA access, but lacks Nebius's end-to-end vertical integration (e.g., in-house power/cooling optimized for inference). Nebius's AI-only design hits higher Model FLOPs Utilization (MFU) for training, and its Inference-as-a-Service (optimized for models like Llama/Flux) delivers lower latency at scale. A direct H200 GPU benchmark shows Nebius at $3.50/hour crushing AWS ($4.50) and CoreWeave ($6.30), with better reliability via auto-healing systems—addressing complaints of longer support waits on CoreWeave/Lambda.
This specialization isn't just hype: Nebius's $17.4B Microsoft deal (for 200K NVIDIA GB300 GPUs through 2031) validates it as a go-to for hyperscalers outsourcing AI overflow, as they can't build fast enough internally.
#### 2. **Developer Pain Points: "Lego Blocks" for Faster Time-to-Value with Uncompromised Performance**
Nebius's platform acts as modular building blocks—pre-optimized clusters, APIs, Terraform support, and managed services—for AI devs to prototype, train, and deploy without wrestling integrations. This slashes time-to-value (e.g., from weeks to days) while maintaining high performance, a pain point for fragmented alternatives.
- **Vs. Hyperscalers (AWS, Google Cloud)**: AWS/Azure/GCP bundle AI with vast ecosystems (e.g., Bedrock/Vertex AI), but this creates bloat—devs spend 40-60% of time on setup, per industry reports, due to non-AI-optimized networking/storage. Nebius's full-stack (hardware + software) lets teams maintain 90%+ utilization out-of-the-box, with structured APIs and guides that "make integration smoother" than GCP's scattered docs. For enterprises like pharma/media, Nebius's neutrality (supports any model/provider) avoids lock-in, unlike Google tying inference to its TPUs.
- **Vs. Neoclouds (CoreWeave, Lambda)**: CoreWeave's automated lifecycle management abstracts ops well, but lacks Nebius's developer-centric tools (e.g., console/API for mixed cloud/HPC workflows). Lambda's "AI developer cloud" is budget-friendly for deep learning but trails in multitenancy and metadata management. Nebius's WEKA partnership delivers microsecond-latency storage at 2PB scale, enabling "outstanding throughput" for mixed workloads—far beyond Lambda's bare-metal focus. A research institution using Nebius+WEKA called it a "scalable, fully managed platform" that "exceeds expectations," accelerating AI apps without the vendor consolidation headaches of CoreWeave.
In essence, Nebius turns AI infra into a "subscription-like" utility: reserved-capacity deals via partners like TD SYNNEX convert capex to recurring revenue, with $10M ARR per MW density that's unmatched.
#### Bottom Line: Nebius as the "AWS of AI" for the Inference Era
Roman's reasons highlight why Nebius isn't just competitive—it's preferable for AI-first teams chasing speed and simplicity. Hyperscalers offer breadth but sacrifice AI depth; neoclouds like CoreWeave/Lambda provide GPUs but falter on seamless, performant dev experiences. With NVIDIA's $700M backing, global expansion (Europe/U.S./Iceland's renewables), and a shift to inference (the "real moat" of AI economics), Nebius is positioned for sticky, high-margin growth in a $260B+ market. If execution holds (e.g., Blackwell Ultra readiness), it's the nimble specialist outpacing the pack.
This post was edited on 10/11/25 at 2:40 pm
Posted on 10/11/25 at 2:34 pm to bayoubengals88
Nebius ($NBIS) outperforms AWS, Google Cloud, CoreWeave, and Lambda for AI workloads due to its specialized, developer-friendly platform, as Roman highlights. Here's why:
- **AI-Centric Design**: Built for distributed training/inference, Nebius achieves 90%+ GPU utilization, 4x faster than AWS/GCP in MLPerf Training 5.0 (405B model).
- **Superior Performance**: 3.2 Tbit/s InfiniBand networking per host vs. hyperscalers’ mixed-use setups; H200 GPU at $3.50/hour beats AWS ($4.50), CoreWeave ($6.30).
- **Time-to-Value**: Pre-optimized “Lego blocks” (clusters/APIs) cut setup time by 40-60% vs. AWS/GCP’s bloated ecosystems.
- **Developer Experience**: Neutral platform avoids lock-in, unlike Google’s TPU reliance; smoother APIs than CoreWeave’s scattered Kubernetes.
- **Storage Edge**: WEKA partnership delivers microsecond-latency at 2PB scale, surpassing Lambda’s bare-metal focus.
- **Reliability**: Auto-healing systems reduce downtime, addressing CoreWeave/Lambda’s longer support waits.
- **Market Validation**: $17.4B Microsoft deal for 200K NVIDIA GB300 GPUs cements Nebius’s lead.
- **Inference Focus**: Optimized for inference economics, positioning Nebius for $260B+ market growth.
Nebius’s AI-native stack and seamless dev tools make it the top choice for performance and speed.
Posted on 10/11/25 at 2:35 pm to bayoubengals88
The "$260B+ market growth" in my earlier summary refers to the explosive expansion of the AI infrastructure and cloud AI market, where Nebius is positioned to capture significant share. Based on industry forecasts, this market is projected to grow from around $80-100B in 2025 to over $360B by 2030 (CAGR 32-39%), representing **$260B+ in cumulative value creation** during that period.
This growth is fueled by surging demand for GPU-accelerated training/inference, hyperscaler investments (e.g., $315B in 2025 alone), and inference workloads—Nebius's sweet spot—driving the "AI economics moat."
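For anyone wanting to sanity-check the back-of-the-envelope math in that post, here's a quick sketch. The figures (~$80-100B in 2025 growing to ~$360B by 2030) are taken from the post itself, not independently verified; the implied CAGR lands roughly in the 29-35% range, close to the 32-39% the post cites.

```python
# Sanity check of the market-size math: implied CAGR and incremental value
# from ~$80-100B (2025) to ~$360B (2030), figures as quoted in the post.
def cagr(start, end, years):
    """Compound annual growth rate over the given number of years."""
    return (end / start) ** (1 / years) - 1

for start in (80, 100):
    rate = cagr(start, 360, 5)
    added = 360 - start
    print(f"start=${start}B -> implied CAGR ~{rate:.0%}, incremental value ${added}B")
```

With a $100B starting point the incremental value works out to $260B, which is where the "$260B+" figure comes from.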
Posted on 10/11/25 at 4:13 pm to bayoubengals88
I am not convinced this Friday action is the start of a bear. Granted, I worry a little about crypto action this weekend if it happens to be a proxy, but the data center build-out is not impacted and has to happen. It is not like multiples are rich for the data center play when you take the pipelines into account. We will see, I guess. If I wasn't using some leverage it wouldn't even be a thought lol
Posted on 10/12/25 at 9:56 am to igoringa
I agree but the bigger question is how long does this scuffle last. A day? A month? We can wait out a week or so but if it turns into a protracted dispute like in Feb-March we need to be defensive short term.
Posted on 10/12/25 at 10:03 am to SquatchDawg
quote:
I agree but the bigger question is how long does this scuffle last. A day? A month?
I was about to post a thread about this. How long till we bounce back?
Trump stated Nov 1. I think that's the hurdle. So the market will move up and down until it sees what Nov 1 does.
So yes. We need to be protective and we need a thread on that to help others out here.
One thing I keep telling my nephew. Stop going all in. Leave room for the "I can not control this" moments. If you are on margin, and are all in... you were screwed this week.
For me, It's buying opportunity because I never go past a 50% buffer. I will get stocks and etfs way cheaper during this time.
Posted on 10/12/25 at 10:44 am to BCreed1
The Chinese are already backtracking a bit. I think they will come to a deal before 11/1. This doesn't feel like the kind of deal that either side will be willing to go to the mat for.
It feels like the Chinese may be testing the boundaries to see how we react. Trump is overreacting to send the message, "Don't frick with us."
Posted on 10/12/25 at 11:25 am to Jax-Tiger
I hope this is right!
I know I was able to get into ETHT. My list was these before everything popped up. Now I can work my way into these:
NBIS... in at $82
ETHT... In at $80 (as of Friday)
CRWV... My next one.
NEBX... a 2x where calls are between 1K and 2K per contract sold, per month.
ETHT was down 21% on Friday, down from $98.68. I sold a $140 call that ends Friday, and will sell the next one for the next month. The $98 call is roughly $1K.
NEBX... down 10% to $79. It was at $87. A call on the $87 strike that ends this Friday sells for roughly $620 in premium. In Nov (35 days out), depending if it stays in this area, roughly $2K.
My goal with sold calls (this does not include long term holds nor dividend) is to sell $5K per month.
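The $5K/month premium goal above is easy to reason about with a little arithmetic. A minimal sketch, using the approximate premiums quoted in the post (not live quotes), and the standard 100-shares-per-contract multiplier:

```python
import math

# How many covered-call contracts must be sold to hit a monthly premium goal,
# given an approximate premium per contract (figures from the post above).
def contracts_needed(monthly_goal, premium_per_contract):
    """Contracts to sell per month to reach the premium goal (rounded up)."""
    return math.ceil(monthly_goal / premium_per_contract)

# At the ~$2,000/month NEBX premium quoted in the post:
print(contracts_needed(5000, 2000))  # 3 contracts, i.e. 300 shares held
# At the ~$620 weekly premium, stacked weekly instead:
print(contracts_needed(5000, 620))   # 9 contracts across the month
```

Each contract obligates 100 shares, so the capital tied up scales with the share price as well as the premium.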
This post was edited on 10/12/25 at 11:26 am