Ethernet Commoditizes Everything. Except EOS
Halfway through Tom Emmons’ scale-out segment at NFD40, a fellow delegate looked at Arista’s load-balancing slide and said what most of us were thinking… this could have any vendor’s logo on it. Tom’s answer was honest: it’s really just the polish. You can solve load balancing for individual tests, but tuning the configuration so it still performs across multiple workloads is the harder problem.

That candid moment reframed the rest of the session for me. The presentation was structured to avoid admitting it, but the hardware advantage is disappearing. Broadcom silicon is Broadcom silicon… two-tier leaf-spine, ECMP. All commodity. The differentiators Brendan Gibbs enumerated at the open (portfolio breadth, hierarchical hybrid buffering, high-radix modular systems) are real, but most of them are available in some form from every serious player in the space.
Arista’s real product is EOS, and the operational muscle memory built around it. The switches are the delivery mechanism.
EOS is the hook
The thing Arista has that nobody else has is nearly twenty years of running a single operating system image across every platform they have ever shipped. Campus access, data center spine, DCI routers, AI back-end switches: all the same image, CLI, state model, BGP stack, and telemetry pipeline.
That’s an operational advantage competitors cannot replicate without rewriting their stack. Cisco runs at least four operating systems in production today. Juniper runs two with a third emerging. Nokia’s SR Linux is good but young. Arista is the only vendor where the engineer who configured your campus access layer can walk into the AI back-end and recognize what they are looking at.
Tom laid this out near the end of his segment: a single BGP stack, tested and deployed across all four AI fabrics, running on one EOS image. He framed it as why Arista can claim leadership across fabrics. That's the lock-in: the cost of staying beats the cost of leaving, every single time. Retraining an entire operations team on a second OS, a second CLI, a second state model… that's a bill nobody wants to pay.

CloudVision is the retention model
Praful's observability segment was the last slot on the agenda, which was a mistake, because it's where the argument closes. CloudVision is a time-series data lake (NetDL) that ingests every state change on every switch: MAC table updates, route table churn, BGP session flaps, config changes, VLAN modifications, all streamed, stored, and queryable against historical baselines. Calling it a NetOps dashboard undersells what it does by an order of magnitude.
On top of that, they have opened NetDL to third-party telemetry: compute via Prometheus and OpenMetrics, storage, AI job orchestrators, GPUs. So when a training job runs slow, the operator can correlate flow-level network data against the actual job, the specific GPU, the NIC, and the orchestrator state in one place. The jobs dashboard they demoed is the manifestation of that integration.
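The kind of correlation the jobs dashboard implies can be sketched in miniature. Everything below is hypothetical (the field names, the five-second window, the sample data); it stands in for NetDL's actual query model, which Arista did not detail:

```python
# Hedged sketch: joining network flow telemetry with job/GPU metrics on a
# shared NIC and time window, the way a jobs dashboard might surface
# "this slow training step lines up with packet drops on this NIC."
# All names and numbers are illustrative, not CloudVision's real schema.
from dataclasses import dataclass

@dataclass
class FlowSample:
    ts: int          # epoch seconds
    src_nic: str
    drops: int       # packet drops observed in this interval

@dataclass
class JobSample:
    ts: int
    job_id: str
    gpu: str
    nic: str
    step_time_ms: float  # training step latency

def correlate(flows, jobs, window=5):
    """Pair each job sample with flow samples from the same NIC within
    `window` seconds that saw drops, so an operator can check whether
    network loss lines up with job slowdowns."""
    hits = []
    for j in jobs:
        for f in flows:
            if f.src_nic == j.nic and abs(f.ts - j.ts) <= window and f.drops > 0:
                hits.append((j.job_id, j.gpu, f.src_nic, f.drops, j.step_time_ms))
    return hits

flows = [FlowSample(100, "nic0", 0), FlowSample(130, "nic0", 42)]
jobs = [JobSample(101, "job-7", "gpu3", "nic0", 120.0),
        JobSample(131, "job-7", "gpu3", "nic0", 480.0)]
print(correlate(flows, jobs))  # only the slow step pairs with the lossy interval
```

The point of doing this in one data lake rather than three dashboards is exactly the join above: without shared timestamps and a shared NIC identity, the operator is eyeballing graphs side by side.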

Once a team has built their runbooks, their alerts, their historical baselines, and their troubleshooting workflows around CloudVision, moving to another vendor means reconstructing all of that from scratch. That is a bigger switching cost than any proprietary ASIC feature.
Arista sells this as observability. I believe it’s retention.
Open marketing
Arista spends a lot of time on what they call their commitment to openness: open standards, Ethernet everywhere, no proprietary lock-in. They name-check OSFP, LPO, XPO, the Ultra Ethernet Consortium, and the OCP ESUN workstream as evidence.

Meanwhile, their actual differentiation is a proprietary operating system that runs on every box in their catalog.
Open is what you call the layers you can’t monetize. Proprietary is what you call the ones you can. Arista is doing both well, but the marketing team would prefer you not notice. The strategy is sound: commoditize the layers you don’t own (silicon, optics form factors, transport protocols), keep tight control of the one you do (EOS, CloudVision).
The NVIDIA framing is where this gets sharpest. Brendan opened with the usual line: we respect NVIDIA, we thank NVIDIA. Tom, to his credit, let the snark out later… “Really? A vendor would do that? Let me write that down.” Behind the diplomacy, Arista wants InfiniBand and NVSwitch to lose, and Ethernet to win scale-up the way it won scale-out. Once Ethernet wins a fabric, the silicon commoditizes fast and the contest moves up to the operating system and the data plane. Arista likes those odds.
XPO versus CPO
Vijay’s optics segment had the most technically interesting bet of the day, and it went by in about ninety seconds. Arista is betting on XPO (extra dense pluggable optic, their MSA) over co-packaged optics for most AI network roles. The reason, buried in an offhand line near the end: if one channel out of sixty-four fails in a CPO switch, you replace the whole switch.

That is a devastatingly concrete argument against CPO's operational model, and it should have been a headline slide. Nobody who operates at Meta or Microsoft scale wants an architecture where a single optical channel failure takes out a switch's worth of ports. The reliability math is brutal.
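How brutal? A back-of-the-envelope calculation makes it concrete. The 1% per-channel annual failure probability below is an illustrative placeholder, not a vendor figure, and it assumes independent failures:

```python
# Hedged sketch: illustrative reliability comparison between co-packaged
# optics (CPO, fixed channels) and pluggable optics (replace the module).
# The per-channel failure rate is a made-up placeholder.

def p_any_channel_fails(p_channel: float, channels: int = 64) -> float:
    """Probability that at least one of `channels` independent optical
    channels fails within the period."""
    return 1.0 - (1.0 - p_channel) ** channels

p = 0.01  # assumed 1% annual failure probability per channel
p_switch = p_any_channel_fails(p)

# CPO: any single channel failure forces a whole-switch replacement.
print(f"P(switch replacement per year, CPO): {p_switch:.1%}")  # ≈ 47.4%

# Pluggable: the same failure rate costs module swaps, not switches.
print(f"Expected module swaps per year, pluggable: {64 * p:.2f}")
```

Even at a modest per-channel failure rate, roughly half the CPO switches need replacing every year under these assumptions, while the pluggable design absorbs the same failures as routine module swaps. That asymmetry is the whole argument.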

XPO preserves the pluggable model (you replace the module, not the switch) while capturing most of the density and power-efficiency benefits. It’s a conservative bet wearing an innovation story’s clothes. That is what makes it a good bet.
What Arista is actually betting
Arista believes Ethernet is going to win every tier of the AI network stack: front-end, scale-out, scale-across, and scale-up. Once that happens, the silicon commoditizes fast because Broadcom, Marvell, and in-house hyperscaler designs all converge on similar building blocks. When the silicon flattens, the differentiators move up the stack to the operating system, the data plane, and the operational model.
Arista has been building for that world since 2008 without calling it that. EOS is the bet, CloudVision is the insurance policy, the switches are the delivery mechanism. If the bet is right, the next decade of networking looks a lot like the last decade of cloud: hardware gets cheaper and more interchangeable every year, and the durable value sits in the control plane and the observability layer. Arista owns both of those in their space today.
The threat isn’t Cisco catching up or Nokia undercutting on price. It’s SONiC plus a credible open telemetry stack becoming viable for AI back-ends, and the operational lock-in Arista spent two decades building eroding in five. Arista built the last network OS anyone wants to run. The risk is that somebody does it in the open, and does it well enough to matter.
Is SONiC close? If you have thoughts, drop a note in the comments. I’d like to know what the operational gap looks like from the trenches.
Disclosure: I attended Networking Field Day 40 as a delegate. My travel and accommodations were covered, but I was not compensated for this post and the opinions are my own.
Links & Resources
Other Delegate Posts
- NFD40 – Arista Networking & AI - Peter Welcher, LinkedIn
- Arista @ NFD40: Modern Fabrics for AI - Jason Gintert, Bits in Flight