Author Archives: Priscilla.Luo

Different Applications for 10G SFP+ Cables


10G SFP+ optics come in various kinds, including DACs, AOCs, and other 10G SFP+ optics (10GBASE-SR/LR/ER/ZR and 10GBASE-T copper transceivers), plus patch cables and copper cables, which are widely adopted in data centers to connect servers, storage appliances, and switches. Each suits a different application and distance. In the following, we will discuss these cables in turn.

10G DAC: Server to Switch Connectivity

A direct attach cable (DAC) is a sheathed high-speed cable with SFP+ connectors on each end. Its main use is connecting a server to a switch within the rack. Top-of-rack interconnections in data centers are now commonly made with 10G direct attach cables, which offer a better alternative to RJ45 connections, whose bulkier interface and narrower equipment and protocol compatibility have cost them ground. For short-range connections of 5 m to 10 m, a direct attach cable offers an easier, more affordable, and better-performing solution. Servers are typically connected to a switch within the same rack, and a DAC supports link lengths up to 7 m, making it well suited to server-to-switch connections.
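The distance-based selection logic described in this article can be sketched as a small helper function. The thresholds below are illustrative, taken from the typical reaches mentioned in the text (DAC up to about 7 m, AOC up to about 100 m), and are not a substitute for checking a cable's actual specification:

```python
def pick_10g_cabling(distance_m: float) -> str:
    """Suggest a 10G SFP+ cabling option for a given link length.

    Thresholds are illustrative: passive DAC reaches ~7 m,
    AOC ~100 m; beyond that, use a transceiver with fiber.
    """
    if distance_m <= 7:
        return "10G SFP+ DAC"   # in-rack server-to-switch links
    if distance_m <= 100:
        return "10G SFP+ AOC"   # inter-rack switch-to-switch links
    return "10G SFP+ transceiver + fiber patch cable"  # longer runs


print(pick_10g_cabling(3))    # short in-rack link
print(pick_10g_cabling(50))   # between racks
print(pick_10g_cabling(300))  # beyond AOC reach
```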

FS 10G DACs are available in different lengths, with customization offered as well. Every cable is individually tested on corresponding equipment from Cisco, Arista, Juniper, Dell, Brocade, and other brands, and has passed FS's intelligent quality control system. Some of these products are shown in the picture below.

10G AOC: Switch to Switch Connectivity

10G active optical cable (AOC) assemblies are high-performance, cost-effective I/O solutions for 10G Ethernet and 10G Fibre Channel applications. They can also serve as an alternative to SFP+ passive and active copper cables while providing improved signal integrity, longer distances, superior electromagnetic immunity, and better bit error rate performance. They allow hardware manufacturers to achieve high port density, configurability, and utilization at low cost and with a reduced power budget. Unlike DACs, which serve short distances, AOCs can reach transmission distances up to 100 m, so they are often used in switch-to-switch connections.


FS active optical cable (AOC) assemblies use active circuits to support longer distances than standard passive or active SFP+ copper cables. FS also offers Cisco-compatible AOCs designed for high-speed, short-range data links over optical fiber.

10G SFP+ Optics: Server/Storage to Switch Connectivity

10G SFP+ transceivers, including 10GBASE-SR/LR/ER/ZR and 10GBASE-T copper transceivers, cover standard as well as CWDM and DWDM applications. The range supports 850 nm and 1310 nm wavelengths, 18 channels for CWDM applications, and 40 channels for DWDM applications, with both short-haul and long-haul receivers available. Since server- or storage-to-switch connections require reliable, scalable, high-speed performance, transceivers plus patch cables are usually adopted for these links.


FS 10G transceivers come in various types, including GBIC, SFP+, XFP, X2, and XENPAK optics, which can be deployed in diverse networking environments. With industry-wide compatibility and a strict test program, FS 10G SFP+ modules give customers a wide variety of 10 Gigabit Ethernet connectivity options, such as server/storage-to-switch connectivity.

Conclusion

Different cables suit different distances and applications. Generally speaking, 10G DACs are perfect for short-reach applications within racks, while AOCs suit inter-rack connections between ToR and EoR switches. With excellent quality and a lifetime warranty, FS 10G optics deliver real-time network intelligence at 10 Gbps speeds. All the products mentioned above are in stock. For more information, please visit us at www.fs.com.

Differences Between Cloud Computing and Data Center

Many people are confused about what cloud computing and data centers are. They often ask questions like, “Is a cloud a data center?”, “Is a data center a cloud?” or “Are cloud and data center two completely different things?” Maybe you know your company needs the cloud and a data center. You may also know your data center needs the cloud and vice versa, but you just don’t know why. Don’t worry: this essay will give you a thorough understanding of the two terms and explain how they differ. Let’s begin with their definitions.

What Is Data Center and Cloud Computing?

The term “data center” can be interpreted in a few different ways. First, an organization can run an in-house data center maintained by trained IT employees whose job is to keep the system up and running. Second, it can refer to an offsite storage center that consists of servers and other equipment needed to keep the stored data accessible both virtually and physically.

The term “cloud computing,” by contrast, did not exist before the advent of the Internet. Cloud computing changes the way businesses work: rather than storing data locally on individual computers or a company’s network, it delivers data and shared resources via a secure, centralized remote platform. Instead of using its own servers, a company places its resources in the hands of a third-party organization that offers such a service.

Cloud Computing VS. Data Center in Security

Since the cloud is an external form of computing, it may be less secure, or require more work to secure, than a data center. Unlike a data center, where you are responsible for your own security, the cloud means entrusting your data to a third-party provider that may or may not have the most up-to-date security certifications. If your cloud spans several data centers in different locations, each location will also need proper security measures.

A data center is also physically connected to a local network, which makes it easier to ensure that only those with company-approved credentials and equipment can access stored apps and information. The cloud, however, is accessible to anyone with the proper credentials anywhere there is an Internet connection. This opens a wide array of entry and exit points, all of which need to be protected to make sure that data transmitted to and from them is secure.

Cloud Computing VS. Data Center in Cost

For most small businesses, cloud computing is a more cost-effective option than a data center. When you choose a data center, you have to build the infrastructure from scratch and are responsible for your own maintenance and administration. A data center also takes much longer to get started and can cost businesses $10 million to $25 million per year to operate and maintain.

Unlike a data center, cloud computing does not require time or capital to get up and running. Instead, most cloud computing providers offer a range of affordable subscription plans to meet customers’ budgets and scale the service to their actual needs. Data centers take time to build, whereas cloud services are available for use almost immediately after registration.

Conclusion

Going forward, cloud computing services will become increasingly attractive for their low cost and convenience. The cloud creates a new way to facilitate collaboration and information access across great geographic distances while reducing costs. Compared with the traditional data center, therefore, the future of cloud computing looks much brighter.

Are White Box Switches Equal to OEM Switches?

With low cost and excellent performance, the white box switch has been a hot topic in the past few years. However, the basic definition of a white box switch remains vague and ambiguous for several reasons. First, no one has ever set down an accurate, standard conception of white box switches. Second, manufacturers with different interests and demands deliberately blur the definition. Third, people unfamiliar with the industry tend to be misinformed, which also leads to confusion; some even simply equate a white box switch with an OEM switch. So what exactly is a white box switch?


How to Understand White Box Switches?

Taken literally, a white box switch is a switch without a brand label. There is a deeper connotation, however: this kind of switch does not revolve around a brand. Based on this core idea, white box switches can usefully be divided into the following three models:

  • Bare-metal switch. This is the fundamental type of white box switch, shipped with no network operating system except a boot loader. Customers can purchase software from a third party such as Big Switch, Cumulus, or Pica8, or even write their own. They get hardware support from the hardware vendor and software support from the software vendor.
  • White box switch. In this model, the supplier offers switches with both hardware and software (the supplier itself provides only one of the two, either hardware or software, and licenses the other from its partners). Customers can therefore get support for both hardware and software from one supplier, while still choosing among hardware and software options.
  • OEM switch. The hardware and software are manufactured and provided by an OEM (original equipment manufacturer), which designs and builds the switch as specified by another company, to be rebranded or left unbranded. Many people also call this kind of switch a white box switch, and the suppliers offering this service white box suppliers, especially when the supplier is small and not well known.
The Market for White Box Switches

With a wide choice of networking software based on low-cost, commodity hardware, white box switches are bound to have a vast market in the future. Also, with the deployment of SDN, there is an increasing interest in white box switches within the IT community. In the previous text, we have divided white box switches into three types. Next, I will analyze the market for white box switches based upon this classification.


  • Bare-metal switches have been the most widely used, with a customer base drawn mainly from networking giants like Google, Facebook, and Microsoft, which purchase bare-metal switches and develop the networking software themselves. In China, large companies like Baidu, Alibaba, Tencent, and JD have also tried this model, with Baidu the most successful example. These giants choose this kind of white box switch because they are confident and capable enough to develop and operate switch software themselves, and because their extremely large-scale networks require complete, in-house control.
  • Customers for the second type are mainly distributed abroad, with only a few in China. They come mainly from large financial companies, international data corporations, and some network operators, whose scale is second only to the Internet giants. Cost saving is their most important motivation for buying a white box switch, though some of these enterprises choose it for the differentiated operating systems that white box suppliers provide, satisfying specific demands through customized service.
  • Customers for the third type are distributed both at home and abroad. Although this market is smaller than the first two, it has the largest potential, since its customer group includes a large number of VARs (value-added resellers), system integrators, IT product providers, and many medium-sized clients. They adopt white box switches for varied reasons, such as improving their product lines and saving costs.
Summary

Through this essay, we can see clearly that the white box switch is much more than an OEM switch; the latter can be classified as one kind of the former. With lower cost, excellent performance, and huge market potential, white box switches look set to become the mainstream choice for switch adoption.

The Evolution of Data Center Switching

Today, the traditional three-tier data center switching design is a mature technology that has been widely applied. However, with rapid growth in technology, the bottlenecks and limitations of the traditional three-tier architecture keep emerging, and more and more network engineers are abandoning it. So what is the next best option for data center switching? The answer is the leaf-spine network. For many years, data center networks were built in layers that, when diagrammed, suggest a hierarchical tree. As this hierarchy runs up against its limitations, a new model is taking its place. Below is a quick comparison of the two architectures, how they have changed, and the evolution of data center switching.

Traditional Three-Tier Architecture


Traditional three-tier data center switching design historically consisted of core Layer 3 switches, aggregation Layer 3 switches (sometimes called distribution Layer 3 switches), and access switches. Spanning Tree Protocol was used between the aggregation layer and the access layer to build a loop-free topology for the Layer 2 part of the network. Spanning Tree Protocol had many benefits: it was relatively easy to implement, required little configuration, and was simple to understand. However, Spanning Tree Protocol cannot use parallel forwarding paths; it always blocks redundant paths in a VLAN. This limited the ability to build a highly available active-active network, reduced the number of usable ports, and drove up equipment costs.

The Fall of Spanning Tree Protocol

As virtualization started to grow within this architecture, other protocols started to take the lead to allow for better utilization of equipment. Virtual port channel (vPC) technology eliminated Spanning Tree blocked ports, providing active-active uplinks from the access switches to the aggregation Layer 3 switches and making use of the full available bandwidth. The architecture also started to change on the hardware side by extending Layer 2 segments across all of the pods. With this, the data center administrator could create a central, more flexible resource pool that could be allocated based on demand. Some of the weaknesses of the three-tier architecture began to show as virtualization continued to take over the industry and virtual machines needed to move freely between their hosts. This traffic requires efficiency, with low and predictable latency. However, vPC can provide only two parallel uplinks, which left bandwidth as the bottleneck of this design.

The Rise of Leaf-Spine Topology


Leaf-spine topology was created to overcome the bandwidth limitations of the three-tier architecture. In this configuration, every lower-tier switch (leaf layer) is connected to each of the top-tier switches (spine layer) in a full-mesh topology. The leaf layer consists of access switches that connect to servers and other devices. The spine layer is the backbone of the network and is responsible for interconnecting all leaf switches; every leaf switch is connected to every spine. Traffic load can be evenly distributed among the spine switches through path optimization. If one spine switch were to fail completely, it would only slightly degrade performance throughout the data center. Every server is at most a fixed number of hops from any other server in the mesh, greatly reducing latency and allowing for a smooth vMotion experience.
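The full-mesh property described above makes the fabric easy to reason about numerically. As a rough sketch (the switch counts below are hypothetical, not from any particular deployment), the number of fabric links and the number of equal-cost leaf-to-leaf paths follow directly from the topology:

```python
def leaf_spine_links(leaves: int, spines: int) -> int:
    """Fabric links in a full-mesh leaf-spine topology:
    every leaf has one link to every spine."""
    return leaves * spines


def paths_between_leaves(spines: int) -> int:
    """Equal-cost paths between any two leaves: one via each spine,
    which is why losing one spine only slightly degrades capacity."""
    return spines


# A small illustrative fabric with 8 leaves and 4 spines:
print(leaf_spine_links(8, 4))    # 32 fabric links
print(paths_between_leaves(4))   # 4 equal-cost paths between any two leaves
```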

Leaf-spine topology can also be easily expanded. If you run into capacity limitations, expanding the network is as easy as adding another spine switch: uplinks can be extended to every leaf switch, adding interlayer bandwidth and reducing oversubscription. If device port capacity becomes a concern, a new leaf switch can be added. This architecture can also mix chassis switches and fixed-port switches to accommodate different connectivity types and budgets. One flaw of the spine-and-leaf architecture, however, is the number of ports needed to support each leaf. When a new spine is added, each leaf must have redundant paths connected to it, so the number of ports needed can grow incredibly quickly, reducing the number of ports available for other purposes.
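The oversubscription point above can be made concrete with a small calculation. A leaf's oversubscription ratio is its total downlink (server-facing) bandwidth divided by its total uplink bandwidth; the port counts below are hypothetical examples, not recommendations:

```python
def oversubscription(server_ports: int, server_gbps: float,
                     uplinks: int, uplink_gbps: float) -> float:
    """Leaf oversubscription ratio = downlink bandwidth / uplink bandwidth."""
    return (server_ports * server_gbps) / (uplinks * uplink_gbps)


# A leaf with 48 x 10G server ports and 4 x 40G uplinks (one per spine):
print(oversubscription(48, 10, 4, 40))   # 3.0, i.e. 3:1

# Adding two more spines (6 x 40G uplinks) lowers the ratio:
print(oversubscription(48, 10, 6, 40))   # 2.0, i.e. 2:1
```

This is the trade-off the paragraph describes: each added spine buys a lower ratio, but consumes an uplink port on every leaf in the fabric.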

Conclusion

Now we are witnessing a change from the traditional three-tier architecture to a spine-and-leaf topology. With increasing demand in your data center and growing east-west traffic, the traditional network topology can hardly satisfy today's data and storage requirements, and increasingly virtual data center environments require new data center-class switches to accommodate higher throughput and increased port density. So you may need to purchase a data center-class switch for your organization. Even if you don’t need one right now, consider it next year: eventually, server, storage, application, and user demands will require one. You can find best-value, cost-efficient data center switches at FS.com.