Category Archives: Networking

What Is Software-Defined Networking (SDN)?


SDN, short for Software-Defined Networking, is a networking architecture that separates the control plane from the data plane. It involves decoupling network intelligence and policies from the underlying network infrastructure, providing a centralized management and control framework.

How does Software-Defined Networking (SDN) Work?

SDN operates by employing a centralized controller that manages and configures network devices, such as switches and routers, through open protocols like OpenFlow. This controller acts as the brain of the network, allowing administrators to define network behavior and policies centrally, which are then enforced across the entire network infrastructure. An SDN network can be divided into three layers, each of which consists of various components.

  • Application layer: The application layer contains the network applications or functions that organizations use, such as applications for network monitoring, troubleshooting, policy management and security.
  • Control layer: The control layer is the middle layer that connects the infrastructure layer and the application layer. It represents the centralized SDN controller software and serves as the home of the control plane, where the network's intelligence resides and is exposed to the application plane.
  • Infrastructure layer: The infrastructure layer consists of the networking equipment, for instance network switches, servers or gateways, that forms the underlying network and forwards traffic to its destination.

To communicate between the three layers of an SDN network, northbound and southbound application programming interfaces (APIs) are used. The northbound API enables communication between the application layer and the controller, while the southbound API allows the controller to communicate with the networking equipment.
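
Below is a minimal sketch of what a northbound API call might look like: a Python script asks the controller to install a simple policy over REST. The controller address, endpoint path, and payload format are hypothetical, since every controller (OpenDaylight, ONOS, Ryu and so on) defines its own northbound API.

```python
# Hypothetical northbound API call: an application asks the SDN controller
# to install a policy. The controller URL, endpoint path, and payload schema
# are illustrative placeholders, not a real controller's API.
import requests

CONTROLLER = "http://192.0.2.10:8181"   # placeholder controller address

policy = {
    "name": "block-telnet",
    "match": {"ip_proto": "tcp", "tcp_dst": 23},
    "action": "drop",
}

resp = requests.post(f"{CONTROLLER}/api/policies", json=policy, timeout=5)
resp.raise_for_status()
print("Policy accepted:", resp.json())
```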

What are the Different Models of SDN?

Depending on how the controller layer is connected to SDN devices, SDN networks can be divided into four types:

  1. Open SDN

Open SDN has a centralized control plane and uses OpenFlow as the southbound API to carry traffic between physical or virtual switches and the SDN controller.

  2. API SDN

API SDN differs from open SDN. Rather than relying on an open protocol, it uses application programming interfaces on each device to control how data moves through the network.

  3. Overlay Model SDN

Overlay model SDN doesn’t touch the physical network underneath but builds a virtual network on top of the existing hardware. It runs as an overlay, providing tunnels that carry traffic between data center endpoints to solve data center connectivity issues.

  4. Hybrid Model SDN

Hybrid model SDN, also called automation-based SDN, blends SDN features with traditional networking equipment. It relies on automation tools such as agents and Python scripts, together with components that support different operating systems.
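
As an illustration of this automation-driven style, here is a brief Python sketch that pushes a VLAN configuration to a switch over SSH. Netmiko is used as an example library (the post does not name one), and the device address, credentials, and platform type are placeholders.

```python
# Sketch of script-driven network automation: push VLAN config over SSH.
# Netmiko is one common library for this; host, credentials, and the
# device_type value are placeholders and depend on the actual switch.
from netmiko import ConnectHandler

device = {
    "device_type": "cisco_ios",   # assumed platform; adjust per vendor
    "host": "192.0.2.20",
    "username": "admin",
    "password": "secret",
}

commands = ["vlan 100", "name servers"]

conn = ConnectHandler(**device)          # open the SSH session
output = conn.send_config_set(commands)  # enter config mode and apply lines
print(output)
conn.disconnect()
```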

What are the Advantages of SDN?

Different SDN models have their own merits. Here we will focus on the general benefits that SDN brings to a network.

  1. Centralized Management

Centralization is one of the main advantages of SDN. An SDN network enables centralized management through a single management tool, which data center managers can take advantage of. It breaks down the barriers created by traditional systems and provides more agility for both virtual and physical network provisioning, all from a central location.

  2. Security

Although the trend toward virtualization has made it more difficult to secure networks against external threats, SDN brings significant advantages here. The SDN controller gives network engineers a centralized point from which to control security across the entire network, ensuring that security policies and information are applied consistently throughout it. Because SDN uses a single management system, overall security is easier to enforce.

  3. Cost-Savings

An SDN network lowers both operational costs and capital expenditure. For one thing, the traditional way to ensure network availability was to add redundant equipment, which of course adds cost; a software-defined network is far more efficient and does not require extra network switches for this purpose. For another, SDN works well with virtualization, which further reduces the cost of adding hardware.

  4. Scalability

Thanks to the OpenFlow agents and the SDN controller, which provide access to the various network components through centralized management, SDN gives users greater scalability. Compared with a traditional network setup, engineers have far more freedom to change the network infrastructure on the fly without purchasing and configuring resources manually.

In conclusion, in modern data centers, where agility and efficiency are critical, SDN plays a vital role. By virtualizing network resources, SDN enables administrators to automate network management tasks and streamline operations, resulting in improved efficiency, reduced costs, and faster time to market for new services.

SDN is transforming the way data centers operate, providing tremendous flexibility, scalability, and control over network resources. By embracing SDN, organizations can unleash the full potential of their data centers and stay ahead in an increasingly digital and interconnected world.


Related articles: Open Source vs Open Networking vs SDN: What’s the Difference

Layer 2, Layer 3 & Layer 4 Switch: What’s the Difference?


Network switches are a fixture of data centers, where they handle data transmission, and many technical terms are used to describe them. Have you ever noticed that they are often described as Layer 2, Layer 3 or even Layer 4 switches? What are the differences among these technologies? Which layer is better for deployment? Let’s explore the answers in this post.

What Does “Layer” Mean?

In the context of computer networking and communication protocols, the term “layer” is commonly associated with the OSI (Open Systems Interconnection) model, which is a conceptual framework that standardizes the functions of a telecommunication or computing system into seven abstraction layers. Each layer in the OSI model represents a specific set of tasks and functionalities, and these layers work together to facilitate communication between devices on a network.

The OSI model is divided into seven layers, each responsible for a specific aspect of network communication. These layers, from the lowest to the highest, are the Physical layer, Data Link layer, Network layer, Transport layer, Session layer, Presentation layer, and Application layer. The layering concept helps in designing and understanding complex network architectures by breaking down the communication process into manageable and modular components.

In practical terms, the “layer” concept can be seen in various networking devices and protocols. For instance, when discussing switches or routers, the terms Layer 2, Layer 3, or Layer 4 refer to the specific layer of the OSI model at which these devices operate. Layer 2 devices operate at the Data Link layer, dealing with MAC addresses, while Layer 3 devices operate at the Network layer, handling IP addresses and routing. Therefore, switches working at different layers of the OSI model are described as Layer 2, Layer 3 or Layer 4 switches.
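
As a quick, purely illustrative reference, the snippet below maps the seven OSI layers to their names and notes the layer at which each switch type makes its forwarding decision.

```python
# Quick reference: OSI layers and the layer each switch type forwards on.
# Illustrative only.
OSI_LAYERS = {
    1: "Physical",
    2: "Data Link",      # Layer 2 switch: forwards on MAC address
    3: "Network",        # Layer 3 switch: routes on IP address
    4: "Transport",      # Layer 4 switch: classifies on TCP/UDP port
    5: "Session",
    6: "Presentation",
    7: "Application",
}

for layer in (2, 3, 4):
    print(f"Layer {layer} switch works at the {OSI_LAYERS[layer]} layer")
```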

Figure: The OSI model

Switch Layers

Layer 2 Switching

Layer 2 is also known as the data link layer; it is the second layer of the OSI model. This layer transfers data between adjacent network nodes in a WAN or between nodes on the same LAN segment. It provides a way to move data between network entities and to detect, and possibly correct, errors that occur in the physical layer. Layer 2 switching uses the device’s permanent MAC (Media Access Control) address to forward data around a local area network.
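
To make the idea concrete, here is a toy Python model of Layer 2 switching: learn the source MAC on the incoming port, forward to the known port, otherwise flood. Real switches do this in ASIC hardware; the MAC addresses and ports below are made up.

```python
# Toy model of Layer 2 switching: MAC learning plus forward-or-flood.
mac_table = {}  # MAC address -> port

def handle_frame(src_mac, dst_mac, in_port, all_ports):
    mac_table[src_mac] = in_port                    # learn the source MAC
    if dst_mac in mac_table:
        return [mac_table[dst_mac]]                 # forward to the known port
    return [p for p in all_ports if p != in_port]   # unknown: flood

ports = [1, 2, 3, 4]
print(handle_frame("aa:aa", "bb:bb", in_port=1, all_ports=ports))  # flood
print(handle_frame("bb:bb", "aa:aa", in_port=2, all_ports=ports))  # -> [1]
```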

Figure: Layer 2 switching

Layer 3 Switching

Layer 3 is the network layer of the OSI model. Layer 3 switches are, in effect, fast routers that perform Layer 3 forwarding in hardware. This layer provides the means to transfer variable-length data sequences from a source host to a destination host across one or more networks. Layer 3 switching uses the IP (Internet Protocol) address to send information between extensive networks; an IP address identifies a device on the network much as a mailing address tells a mail carrier how to find you.
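
The forwarding decision at Layer 3 is a longest-prefix match on the destination IP address. The toy Python sketch below illustrates that rule with the standard ipaddress module; the prefixes and next hops are made up.

```python
# Toy model of Layer 3 forwarding: longest-prefix match on the destination IP.
import ipaddress

routes = {
    ipaddress.ip_network("10.0.0.0/8"):  "next-hop A",
    ipaddress.ip_network("10.1.0.0/16"): "next-hop B",
    ipaddress.ip_network("0.0.0.0/0"):   "default gateway",
}

def lookup(dst_ip):
    dst = ipaddress.ip_address(dst_ip)
    matches = [net for net in routes if dst in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return routes[best]

print(lookup("10.1.2.3"))   # next-hop B (the /16 beats the /8)
print(lookup("8.8.8.8"))    # default gateway
```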

Figure: Layer 3 switching

Layer 4 Switching

Layer 4 is the transport layer, the middle layer of the OSI model. This layer provides several services, including connection-oriented data streams, reliability, flow control, and multiplexing. Layer 4 uses the TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) protocols, whose headers carry port numbers that identify the application a packet belongs to. This is especially useful for managing network traffic, since many applications use designated ports.
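
Here is a short, illustrative sketch of Layer 4 classification: inspect the protocol and destination port to decide which application a packet belongs to. The port-to-application mapping below covers only a few well-known ports.

```python
# Toy model of Layer 4 classification: decide by protocol and destination port.
WELL_KNOWN = {
    ("tcp", 80): "HTTP",
    ("tcp", 443): "HTTPS",
    ("udp", 53): "DNS",
}

def classify(protocol, dst_port):
    return WELL_KNOWN.get((protocol, dst_port), "other")

print(classify("tcp", 443))   # HTTPS -> could be steered to a web server pool
print(classify("udp", 53))    # DNS
```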

Figure: Layer 4 switching

Also check: What Is Layer 4 Switch and How Does It Work?

Which Layer to Use?

The decision to use Layer 2, Layer 3, or Layer 4 switches depends on the specific requirements and characteristics of your network. Each type of switch operates at a different layer of the OSI model, offering distinct functionalities:

Layer 2 Switches:

Use Case: Layer 2 switches are appropriate for smaller networks or local segments where the primary concern is local connectivity within the same broadcast domain.

Example Scenario: In a small office or department with a single subnet, where devices need to communicate within the same local network, a Layer 2 switch is suitable.

Layer 3 Switches:

Use Case: Layer 3 switches are suitable for larger networks that require routing between different subnets or VLANs.

Example Scenario: In an enterprise environment with multiple departments or segments that need to communicate with each other, a Layer 3 switch facilitates routing between subnets.

Layer 4 Switches:

Use Case: Layer 4 switches are used when more advanced traffic management and control based on application-level information, such as port numbers, are necessary.

Example Scenario: In a data center where optimizing the flow of data, load balancing, and directing traffic based on specific applications (e.g., HTTP or HTTPS) are crucial, Layer 4 switches can be beneficial.

Considerations for Choosing:

  • Network Size: For smaller networks with limited routing needs, Layer 2 switches may suffice. Larger networks with multiple subnets benefit from the routing capabilities of Layer 3 switches.
  • Routing Requirements: If your network requires inter-VLAN communication or routing between different IP subnets, a Layer 3 switch is necessary.
  • Traffic Management: If your network demands granular control over traffic based on specific applications, Layer 4 switches provide additional capabilities.

In many scenarios, a combination of these switches may be used in a network, depending on the specific requirements of different segments. It’s common to have Layer 2 switches in access layers, Layer 3 switches in distribution or core layers for routing, and Layer 4 switches for specific applications or services that require advanced traffic management. Ultimately, the choice depends on the complexity, size, and specific needs of your network environment.

Conclusion

As technology develops, switch intelligence keeps advancing across the different layers of the network. Mixing switches of different layers (Layer 2, Layer 3 and Layer 4) is a cost-effective solution for large data centers. Understanding these switching layers can help you make better decisions.

Related Article:

Layer 2 vs Layer 3 Switch: Which One Do You Need? | FS Community

PCI vs PCI Express: What’s the Difference?


PCI and PCI Express are two different generations of internal bus standards for connecting peripheral devices to equipment such as computers and network servers. But do you know how they are related, and could you tell the differences between PCI and PCI Express? To answer these questions, this post explores both PCI and PCI Express.

What Do PCI and PCI Express Stand for?

What Is PCI?

PCI, or Peripheral Component Interconnect, is a connection interface standard developed by Intel in 1990. Originally it was used only in servers; later, from 1995 to 2005, PCI was widely implemented in computers and other network equipment such as network switches. Most commonly, PCI takes the form of a PCI expansion card inserted into a PCI slot on the motherboard of a host or server. In the expansion card market, the popular PCI expansion cards are NICs (network interface cards), graphics cards, and sound cards.

What Is PCI Express?

Figure 1: PCI Express Network Card

PCI Express, abbreviated PCIe, stands for Peripheral Component Interconnect Express. As the successor to PCI, PCI Express is a connection standard introduced by Intel in 2001; it provides more bandwidth and better compatibility with existing operating systems than PCI. Like PCI, PCIe is used for expansion cards, such as PCIe Ethernet cards, which plug into PCI Express slots.

Comparison of PCI vs PCI Express

As the replacement for PCI, PCI Express differs from it in several respects, such as working topology and bandwidth. This part makes a brief comparison of PCI vs PCI Express.

PCI vs PCI Express in Working Topology: PCI is a parallel connection; devices connected to the PCI bus share that single bus and act as bus masters on it. PCIe, by contrast, is a high-speed serial connection. Instead of one bus handling data from multiple sources, PCIe uses a switch that controls several point-to-point serial connections.


Figure 2: PCI vs PCI Express

PCI vs PCI Express in Bandwidth: PCI comes in fixed widths of 32 and 64 bits, running at 33 MHz or 66 MHz. At 32 bits and 33 MHz the potential bandwidth is 133 MB/s; at 32 bits and 66 MHz it is 266 MB/s; and at 64 bits and 66 MHz it is 533 MB/s. For PCIe, the bandwidth ranges from 250 MB/s to several GB/s per lane, depending on the card size and version. For more detail, refer to the post: PCIe Card Tutorial: What Is PCIe Card and How to Choose It?
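
These figures follow directly from bus width times clock rate. The short sketch below reproduces them; the PCIe per-lane values are the commonly cited approximate effective rates after encoding overhead, so treat them as ballpark numbers.

```python
# Reproducing the bandwidth figures above. PCI bandwidth = bytes per transfer
# times clock rate; PCIe per-lane rates are approximate effective values.
def pci_bandwidth_mbps(bus_width_bits, clock_mhz):
    return bus_width_bits / 8 * clock_mhz   # MB/s

print(pci_bandwidth_mbps(32, 33.33))   # ~133 MB/s
print(pci_bandwidth_mbps(32, 66.66))   # ~266 MB/s
print(pci_bandwidth_mbps(64, 66.66))   # ~533 MB/s

pcie_per_lane_mbps = {"1.0": 250, "2.0": 500, "3.0": 985, "4.0": 1969}
print(16 * pcie_per_lane_mbps["3.0"])  # an x16 Gen3 slot: roughly 15.8 GB/s
```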

PCI vs PCI Express in Other Respects: PCI Express can connect a maximum of 32 end-point devices and supports hot plugging. PCI, by contrast, has no hot-plugging function and supports a maximum of only 5 devices.

FAQs About PCI vs PCI Express

1. Is the speed for PCI slower than PCI Express?

Yes, PCIe is faster than PCI. Taking PCIe x1 as an example, it is at least 118% faster than PCI. The gap is even more obvious when you compare a PCIe-based video card with a PCI video card: a PCIe x16 video card is almost 29 times faster than a PCI video card.

2. Can PCI cards work in PCIe slots?

The answer is no. PCIe and PCI are not compatible with each other because of their different configurations. In most cases, a motherboard provides both PCI and PCIe slots, so fit each card into its matching slot and do not mix up the two types.

3. What is a PCIe slot?

A PCIe slot is the physical connector for a PCI Express card. By and large, there are four slot sizes: x16, x8, x4, and x1. The larger the number, the longer the slot: for example, a PCIe x1 slot is 25 mm long, while a PCIe x16 slot is 89 mm.

Summary

In this post, we compared PCI and PCI Express in terms of their origins, working topology, bandwidth and more, and listed several frequently asked questions for your reference. We hope it helps you tell PCI and PCI Express apart.

25G Ethernet – How It Develops and What’s the Future of It?


Have you ever heard of 25G Ethernet? It is a hot topic these days. So what is it, how did it develop, and what is its future? Let’s find the answers in the following overview of how 25G Ethernet has evolved.

What Is 25G Ethernet? Why Does It Appear?

25G Ethernet, or 25 Gigabit Ethernet, is a standard for Ethernet network connectivity in data center environments, developed by the IEEE P802.3by 25 Gb/s Ethernet Task Force. The IEEE 802.3by standard reuses technology defined for 100 Gigabit Ethernet, which is implemented as four 25 Gbps lanes (IEEE 802.3bj).

Figure: 25G Ethernet to 100G

Alongside 10, 40 and 100GbE networking, 25G Ethernet technology continues to innovate and lays a path to higher networking speeds. You may ask why it appeared at all, since we already had 40G. As you may know, 40GbE technology has evolved over the years and gained some momentum as an option for enterprises, service providers and cloud providers. However, because the underlying technology for 40G Ethernet is simply four lanes at 10G speed, it does not offer the power-consumption advantages during an upgrade to 100G that 25G can offer.

25G Ethernet provides a simpler path to Ethernet speeds of 50 Gbps, 100 Gbps and beyond. With 25G, network operators are no longer forced to go through 40G QSFP ports from one device to another to achieve 100G throughput.
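
The lane arithmetic behind this argument is simple: 40G is four 10G lanes, while 100G in this generation is four 25G lanes, so a 25G lane is a direct building block for 50G and 100G. The snippet below just spells out that math with nominal line rates.

```python
# Lane math behind the 25G argument (nominal line rates, ignoring overhead).
def aggregate(lanes, gbps_per_lane):
    return lanes * gbps_per_lane

print(aggregate(4, 10))   # 40G  = 4 x 10G lanes
print(aggregate(2, 25))   # 50G  = 2 x 25G lanes
print(aggregate(4, 25))   # 100G = 4 x 25G lanes
```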

Development of 25G Ethernet

Year 2014 – 25G Was First Introduced

25G Ethernet dates back to 2014, the year it was first proposed. At that time, a wide range of vendors discussed its cost and efficiency compared with 10G, 40G, and 100G. Some well-known hyperscale data center and cloud computing providers, such as Google, Microsoft, Broadcom, Arista and Mellanox, formed a special research group, the 25G Ethernet Consortium, to explore the standardization of 25G Ethernet and promote its development.

Year 2015 – The First Batch of 25G Products Appeared

Entering the second year of 25G Ethernet exploration, the Consortium analyzed it more deeply and comprehensively. Researchers examined 25G Ethernet from various angles, such as demand trends in data centers, its advantages and applications, and the questions people commonly asked. As the exploration deepened, the standardization of 25G Ethernet gradually took shape, and suppliers developed high expectations for it.

As initiators of the 25G Ethernet Consortium, Broadcom, Mellanox and Arista moved ahead early and planned to launch products for 25G. Broadcom was ramping up production of its “Tomahawk” switch ASICs, and Mellanox had announced its Spectrum ASICs as well as adapter cards supporting 25 Gb/s, 50 Gb/s, and 100 Gb/s speeds on servers. Arista, meanwhile, joined the list of vendors supporting the new 25G Ethernet standards with three new switches, the 7060X, 7260X and 7320X, which support both 25 and 50 Gigabit Ethernet.

Year 2016-2017 – Fast Development of 25G

These two years were significant for 25G Ethernet development. During this period, the IEEE approved the 802.3by specification for 25G Ethernet, and other major suppliers rushed to launch their own 25G products to follow the market trend. 25G Ethernet found more practical applications in the data center.

Figure: The 802.3by specification for 25G Ethernet

In 2016, Marvell introduced what it called the industry’s most optimized 25GbE end-to-end data center solution with its newest Prestera switches and Alaska Ethernet transceivers. Finisar introduced 25G Ethernet optics for high-speed data centers with its SFP28 eSR transceiver, enabling 300-meter links over existing OM3 MMF, and 25G SFPwire, an Active Optical Cable (AOC) with embedded technology that provides real-time troubleshooting and link performance monitoring. In addition, major server vendors including Dell, HPE, and Lenovo offered 25G network adapter solutions. And as a member of the 25G Ethernet Consortium, Mellanox offered the SN2100, a half-rack-width switch with 16 100G ports that can be used as 64 25G ports with breakout cables.

In 2017, 25G was recognized as the industry standard for next-generation server access rates. Related technical specifications, such as those for 25G ToR switches and AOC cables, urgently needed to be finalized, and organizations around the world competed to take the initiative. At that time, China’s ODCC (Open Data Center Committee) introduced the first 25G ToR switch specification and released its details, becoming an important force in the rapid rise of 25G Ethernet.

As companies offered more and more types of 25G SFP28 transceivers, DACs, and AOCs, the call for 25G Ethernet deployment grew louder and louder.

Year 2018 Till Now – Competition Against Other Network Products

2018 was a year of competition between 25G products and other products. During the year, sales of 10G products declined slightly, while 25G products received more and more recognition. In 2018, Supermicro opened a path to 100G networking with new 25G Ethernet server and storage solutions. It offers a wide range of 25G NIC solutions that let customers future-proof nearly any Supermicro system by equipping it with 25G Ethernet networking technology. Supermicro also offers a 25G switch (SBM-25G-100) with the X11 SuperBlade; this switch has twenty 25G downlink connections and four QSFP28 ports, each of which can be configured as a 40G or 100G uplink.

In any case, the arrival of 25G and its impact have given everyone confidence: data centers and suppliers can’t wait to plan for the era of 100G, 200G or even 400G.

How Far Can 25G Ethernet Go?

From all the above, you should have a general understanding of how 25G developed. At present, 25G is mainly used for switch-to-server connections, and it has indeed gained ground in some respects compared with 10G and 40G Ethernet. What’s more, you can see a clear trend in the 25G market from a recent five-year forecast by industry analysts at the Dell’Oro Group, shown below.

Figure: 25G five-year forecast

In the long run, 25G will go further, since a 25G switch offers a more convenient migration path to 100G or even 400G networks.

Related Articles:

25G Switch Comparison: How to Choose the Suitable One?

Taking an In-depth Look at 25G SFP28

Everything You Should Know About Cumulus Linux


Nowadays, many new small and medium-sized internet companies choose a bare metal switch with a third-party network operating system (NOS) for network construction. The NOS they choose is compatible with the Open Network Install Environment (ONIE), a network OS installer that supports loading a network OS of choice and switching to a different one later. Among all the network operating systems, Cumulus Linux is a very popular choice. So what is Cumulus Linux? What are the advantages of this NOS? Is it reliable? Let’s find the answers in the following text.

What Is Cumulus Linux?

Cumulus Linux is a powerful open network operating system designed for data center network infrastructures. It accelerates networking functions on a network switch and acts as a platform for modern data center networking tools, so that networks can be managed the way servers are. This Debian-based network operating system (NOS) runs on hardware from a broad partner ecosystem. In other words, you can run it on a broad range of industry-standard switches from different vendors with various port densities, form factors and capabilities.

Figure: Cumulus Linux

Advantages of Cumulus Linux

In addition to functions such as BGP and OSPF that a normal NOS provides, Cumulus Linux has three main features that many other operating systems don’t support: automation, EVPN and MLAG.

  • Automation: The biggest advantage of this feature is that it saves manpower through automation tools. It also eases deployment and helps with troubleshooting.
  • EVPN: EVPN stands for Ethernet virtual private network. This modern, interoperable technology not only removes much of the complexity of Layer 2 but also allows legacy Layer 2 applications to operate over next-generation Layer 3 networks.
  • MLAG: MLAG is short for multi-chassis link aggregation group. As a multi-device link aggregation technology for data center switches, MLAG bundles member ports that sit on separate chassis; it mainly provides load sharing to increase bandwidth and redundancy in case one of the devices suddenly fails.

Last but not least, beyond these three main features, NCLU is another feature developed by Cumulus Networks to help those who are new to the Cumulus Linux OS. It is similar to a traditional CLI (command line interface) and prompts you with commands during configuration, so you don’t have to worry about being unfamiliar with a NOS you haven’t used before.
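
As a rough illustration, the sketch below drives NCLU’s add/commit workflow from Python using subprocess. It assumes it runs on a Cumulus Linux switch with permission to call the net command; the interface, address and hostname are placeholders, and exact command syntax can vary by release.

```python
# Sketch: calling NCLU from Python on a Cumulus Linux switch.
# Placeholders: swp1, 10.0.0.1/24, leaf01. Requires NCLU on the switch.
import subprocess

def net(*args):
    # Run an NCLU command and raise if it fails.
    subprocess.run(["net", *args], check=True)

net("add", "interface", "swp1", "ip", "address", "10.0.0.1/24")
net("add", "hostname", "leaf01")
net("commit")   # NCLU stages changes and applies them on commit
```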

Is Cumulus Linux Stable?

Will fault handling take longer with the combination of a bare metal switch and the open network operating system Cumulus Linux? Actually, compared with a traditional network switch, the processing speed of this combination is essentially the same as that of an Arista switch, and it has very low latency as well.

In addition, third-party systems such as Cumulus Linux, which are built on Linux, are now very mature in the networking market, just as mainstream operating systems (Windows, Linux, Red Hat, Ubuntu, etc.) are. For example, FS N-series switches are highly compatible with Cumulus Linux, and both support EVPN and MLAG deployment.

Figure: FS N-series switches with Cumulus Linux

Is Cumulus Linux Secure for My Data?

Yes, it is secure. The NOS runs only in the control plane, while your data is processed by the switch hardware (the switching chip and CPU). This separation is commonly known as the isolation of the data plane and the control plane.

Conclusion

From all the above, you should have a general understanding of the open network operating system Cumulus Linux. It is an ideal match for a bare metal switch in data center deployments. With this open NOS, you can easily run networking constructs on switches from different vendors with various configurations and simplify future network construction.

Related Articles:

Network OS Systems for Bare Metal Switch

Network OS Comparison: Open Source OS or Proprietary OS

Cumulus Linux: A Powerful ONOS for Network

What Is Open Source Networking and How to Achieve It?


The traditional network architecture can no longer meet the needs of enterprises, operators and network users: device configuration is complicated and iteration is slow. Open source networking has emerged to solve this problem. So what is open source networking, and how can you achieve it? Read through this post to find the answers.

What Is Open Source Networking?

Open source networking, or the open source network, is a new generation of networking that offers programmable customization, centralized unified management, dynamic traffic monitoring, and automated deployment. It shifts the data center architecture toward virtualization and automation. This new-generation network focuses on technology decoupling, which Dell EMC calls open networking, and it is at the core of the transformation to software-defined networking (SDN).

SDN is a network architecture promoted by the ONF (Open Networking Foundation) to facilitate the whole open source networking environment. The architecture separates the control plane from the data plane and manages network state information centrally and logically, while the underlying network hardware infrastructure is abstracted and defined by upper-layer applications. With this architecture, enterprises and operators gain unprecedented programmability, automation, and network control, letting them build highly scalable, flexible networks that adapt to their changing business needs.

How to Achieve Open Source Networking?

The open source network involves the open networking stack from top to bottom. It starts with disaggregated networking hardware and modern 100G or 400G data center switches, then moves up to network operating systems, network controllers, virtualization, and orchestration. Realizing an open source network therefore involves many pieces; among them, the network operating system and the data center switch are essential in almost every network. So I’ll take a network switch from Dell as an example.

To realize open source networking, an open hardware platform is required at the bottom, on which an open source network operating system can run. For example, the Dell EMC Z9264F-ON switch offers optimum flexibility and cost-effectiveness for web 2.0, enterprise, midmarket and cloud service providers with demanding compute and storage traffic environments.

There are many other examples as well. For instance, the FS bare metal switch N5850-48S6Q works well with the open source network operating system (NOS) Broadcom ICOS. It supports current and future data center requirements, including an x86-based control plane for easier integration of automation tools, and it offers an ONIE installer for third-party network operating systems as well as compatibility with SDN via OpenFlow 1.3.11. Such a combination can centrally manage and control network devices from different vendors through a common API abstracted from the underlying network, facilitating automation and management across the whole network.
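
To show what SDN control over such a switch can look like, here is a minimal OpenFlow 1.3 controller app written with the Ryu framework (Ryu is my example choice; the post only says the switch supports OpenFlow 1.3). On connection it installs a table-miss rule that sends unmatched packets to the controller.

```python
# Minimal OpenFlow 1.3 app sketched with Ryu. Run with: ryu-manager this_file.py
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class TableMissApp(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_features(self, ev):
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        # Match everything; send unmatched packets to the controller.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                          ofproto.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                             actions)]
        mod = parser.OFPFlowMod(datapath=datapath, priority=0,
                                match=match, instructions=inst)
        datapath.send_msg(mod)
```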

Figure: Open source networking with Broadcom ICOS

Open Source Networking Advantages

  • With open source networking, there is no need to configure each device individually as in the past, or to wait for vendors to release new products.
  • It offers a common, open programming environment for operators, enterprises, third-party software vendors and network users, accelerating the rollout of new services and functions in network deployments.
  • Network reliability and security improve through automated, centralized management of network devices, unified deployment policies and fewer configuration errors.

Conclusion

From all the above, you should have a general understanding of what an open source network is and how to achieve it. This new-generation network offers a programmable, automated system that helps build a highly scalable and flexible network, and it is promising for future network construction. You can achieve it with common building blocks such as a network switch and an open source network operating system.

Related Articles:

Network OS Systems for Bare Metal Switch

Network OS Comparison: Open Source OS or Proprietary OS