One of the reasons for buying a Pure Storage FlashArray is speed. The more hops you take, the more time it takes to communicate back and forth. Thus, if you have a slower routed network, you may not be taking full advantage of the FlashArray. A routed network also introduces potential security concerns: while the FlashArray encrypts data at rest, I/O sent over the wire is likely clear text, the exception being if CHAP is in use.

With that in mind:

- Utilizing a closed network is recommended whenever possible.
- VLAN tagging is only supported in Purity 4.6.0+.
- Use all of the FlashArray's interfaces (critical for iSCSI performance).
- Verify all paths are clean; address any CRCs or similar errors.
- Create at least 8 sessions per host (or, again, use all interfaces on Pure).

Many of our customers use a CPU chassis such as a Cisco UCS or an HP c7000. These systems commonly have a number of bladed servers that connect to an embedded switch over a copper bus. All but UCS use a type of "dumb" switch (no zoning) which connects to a core fabric switch (this is true for both FC and iSCSI). UCS connects to an additional switch/bridge, a "Fabric Interconnect," and then to a core switch.

Each one of these steps increases oversubscription. For example, a bladed chassis might have 16 discrete servers. Each of these servers connects to an internal HBA, which connects to the embedded switch. This switch takes these 16 servers and performs a form of NAT, forwarding all of their traffic to a smaller number of ports, commonly 4 and as many as 16. These ports log into a core switch, passing frames over to storage.

The oversubscription rate can get quite high if you use a hypervisor on your discrete servers. Add virtual machines to each blade, let's say 4 VMs per blade, and what do we end up with? 64 initiators share sixteen 8Gb ports. Sixteen 8Gb ports are funneled into an embedded switch with eight 8Gb external ports. We now have 8 entrance points for 64 hosts to communicate with storage, backup, virtual devices, etc. On an 8Gb switch, this is eight hosts for each 8Gb port.

For daily operations, this is usually fine, but if you have several high-demand systems on this chassis (a database, development systems), this configuration can behave like a bottleneck. This is the driving force behind 16Gb Fibre Channel and the coming 32Gb standard. In one support case, each chassis had only two iSCSI connections to the core switch, providing, in real-world use, substantially less than 20Gb of bandwidth for all 64 hosts.

This configuration is particularly devastating for iSCSI. From VMware's Best Practices (emphasis is mine):

> For iSCSI and NFS, make sure that your network topology does not contain Ethernet bottlenecks, where multiple links are routed through fewer links, potentially resulting in oversubscription and dropped network packets.

Any time a number of links transmitting near capacity are switched to a smaller number of links, such oversubscription is a possibility. Recovering from these dropped network packets results in large performance degradation. In addition to the time spent determining that data was dropped, the retransmission uses network bandwidth that could otherwise be used for new transactions.

Be aware that with software-initiated iSCSI and NFS, the network protocol processing takes place on the host system, and thus these might require more CPU resources than other storage options. The cumulative impact of additional CPU overhead is another factor when laying out your iSCSI network.

Assuming that you have plenty of network ports, please do avail yourself of all of Pure's ports. In other words, err on the side of too much bandwidth instead of too little.
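The oversubscription arithmetic above can be sketched in a few lines. This is a minimal back-of-the-envelope illustration using the example numbers from this post (16 blades, 4 VMs per blade, eight 8Gb uplinks), not a general sizing model:

```python
# Back-of-the-envelope oversubscription math for the example chassis above.
# All figures are this post's example numbers, not a general sizing model.

def oversubscription(initiators: int, uplink_ports: int) -> float:
    """Ratio of traffic sources to uplink ports."""
    return initiators / uplink_ports

blades = 16            # discrete servers in the chassis
vms_per_blade = 4      # hypervisor guests, each an iSCSI initiator
uplinks = 8            # 8Gb external ports on the embedded switch
port_speed_gb = 8      # nominal speed of each uplink

initiators = blades * vms_per_blade            # 64 initiators
ratio = oversubscription(initiators, uplinks)  # "eight hosts per 8Gb port"

# Worst case: every initiator pushes traffic at once and shares fairly.
per_initiator_gb = (uplinks * port_speed_gb) / initiators  # 1.0 Gb each

# The support case: only two 10Gb iSCSI links for the same 64 hosts.
support_case_gb = (2 * 10) / initiators  # ~0.31 Gb per host, before overhead

print(f"{initiators} initiators over {uplinks} ports: {ratio:.0f}:1 oversubscription")
print(f"Worst-case fair share: {per_initiator_gb:.1f} Gb per initiator")
print(f"Support case fair share: {support_case_gb:.2f} Gb per host")
```

Note that these fair-share figures are an upper bound: retransmissions after drops, protocol overhead, and uneven traffic all push the real per-host number lower still.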