I have just decided to write a dedicated blog post series about DELL Force10 networking. Why?
Those who know me in person are most probably aware that my primary professional focus is VMware vSphere infrastructure and enterprise datacenter hardware. I sometimes discuss with infrastructure experts, managers, and other IT folks which vSphere component - compute (servers), storage, or networking - is the most important, complex, critical, or expensive. I think it is a needless discussion because all components are important and have to be integrated into a single system fulfilling all requirements, dealing with known constraints, and mitigating all potential risks. Such an integrated infrastructure is very often called a POD, which stands, as far as I know, for Performance Optimized Datacenter. These integrated systems are, from my point of view, new datacenter computers with dedicated but distributed compute, storage, and networking components. I would prefer to call such equipment an "Optimized Infrastructure Block" or a "Datacenter Computer" because it is not only about performance but also about reliability, capacity, availability, manageability, recoverability, and security. We call these attributes infrastructure qualities, and the whole infrastructure block inherits its qualities from its sub-components. Older IT folks often compare this concept with mainframe architectures; however, nowadays we usually use commodity x86 hardware "a little bit" optimized for enterprise workloads.
By the way, that's one of the reasons I like the current DELL datacenter product portfolio: DELL has everything I need to build a POD - servers, storage systems, and now also networking - so I am able to design a single-vendor infrastructure block with unified support, warranty, etc. Maybe not everyone knows, but DELL acquired the storage vendors EqualLogic and Compellent some time ago and, more importantly for this blog post, DELL also acquired the well-known (at least in the US) datacenter networking producer Force10. For official acquisition details look here.
But back to networking. Everybody would probably agree that networking is a very important part of vSphere infrastructure for several reasons. It provides the interconnect between clustered components - think about vSphere networks like Management, vMotion, Fault Tolerance, VSAN, etc. It also routes network traffic to the outside world. And sometimes it even provides the storage fabric (iSCSI, FCoE, NFS). That importance is actually the reason I'm going to write this series of blog posts about DELL Force10 networking. However, I don't want to write about legacy networking but about the modern networking approach for next-generation virtualized and software-defined datacenters.
Modern physical networking is not only about hardware (intelligence burned into ASICs with high-bandwidth, fast, low-latency interfaces) but also about software. The main software sits inside the DELL Force10 switches themselves: the switch firmware called FTOS - Force10 Operating System (for more general information about FTOS look here). However, today it is not only about embedded switch firmware but also about the whole software ecosystem - management tools, centralized control planes, virtual distributed switches, network overlays, etc.
In future articles I would like to deep-dive into FTOS features, configuration examples, and virtualization-related integrations.
The next - actually the first - technical article in this series will be about the typical initial configuration of a Force10 switch. I know it is not rocket science, but we have to know the basics before taking off. Later I would like to write about more complex designs, capabilities, and configurations like:
- Multi-Chassis Link Aggregation (aka MC-LAG), which in Force10 terminology is called VLT - Virtual Link Trunking (see the configuration sketch after this list).
- Virtual Routing and Forwarding (aka VRF). Some S-series Force10 models with FTOS 9.4 support VRF-lite.
- some Software Defined Networking (aka SDN) capabilities like Python/Perl scripting inside the switch, REST API, VXLAN hardware VTEP, integration with the VMware Distributed Virtual Switch, integration with VMware NSX, OpenFlow, etc.
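Just to give a taste of VLT before the dedicated article, below is a minimal configuration sketch for one switch of a VLT pair. The domain ID, port-channel number, IP address, and priority are hypothetical example values, and the exact command set can differ between FTOS versions and platforms, so treat this as an illustration rather than a reference.

  ! Minimal VLT domain sketch for the first switch of a VLT pair.
  ! Domain ID, port-channel number, IP address and priority are
  ! hypothetical example values.
  vlt domain 10
   ! port-channel 100 acts as the VLT interconnect (VLTi) between the peers
   peer-link port-channel 100
   ! out-of-band management IP of the VLT peer, used for the backup heartbeat
   back-up destination 192.168.1.2
   ! lower priority wins the primary role election
   primary-priority 4096
   ! unit-id must differ between the two peers (0 and 1)
   unit-id 0

On the second switch of the pair the configuration would be mirrored: unit-id 1 and a back-up destination pointing to the first switch's management IP.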
[ Next | DELL Force10 : Initial switch configuration ]