Understanding Data Center Redundancy

Maximizing uptime should be a top priority for every data center, whether small or hyperscale. To keep a data center running around the clock, a plan for redundant systems is a must.

What Is Data Center Redundancy?

Data center redundancy refers to a system design in which critical components such as UPS units, cooling systems, and backup generators are duplicated so that data center operations can continue even if a component fails. For example, a redundant UPS system takes over when a power outage occurs. In the event of downtime caused by hazardous weather, power outages, or component failures, these backup components keep the whole system running.

Why Is Data Center Redundancy Important?

It is imperative for businesses to increase uptime and recover quickly from downtime, whether unexpected or planned. Downtime hurts business. It can have a serious and direct impact on brand image, business operations, and customer experience, resulting in devastating financial losses, missed business opportunities, and a tarnished reputation. Even for small businesses, unscheduled downtime can cost hundreds of dollars per minute.

Redundancy configuration in data centers helps cut the risk of downtime, thereby reducing the resulting losses. A well-planned redundancy design means shorter potential downtime in the long run. Redundant components also help keep data safe and secure by allowing data center operations to continue through component failures.

Redundancy is also a crucial factor in gauging data center reliability, performance, and availability. The Uptime Institute offers a tier classification system that certifies data centers according to four distinct tiers: Tier 1, Tier 2, Tier 3, and Tier 4. Each tier has strict, specific requirements for the level of data center redundancy.

Different Levels of Redundancy

There is no one-size-fits-all redundancy design. Lower levels of redundancy mean more potential downtime in the long run, while more redundancy results in less downtime but higher costs for maintaining the redundant components. If your business model requires as little downtime as possible, however, that extra cost is often justifiable in terms of profit and overall net growth. To choose the right configuration for your business, it is important to understand the capabilities and risks of the different redundancy models: N, N+1, N+X, 2N, 2N+1, and 3N/2.

N Model

N equals the amount of capacity required to power, backup, or cool a facility at full IT load. It can represent the units that you want to duplicate such as a generator, UPS, or cooling unit. For example, if a data center requires three UPS units to operate at full capacity, N would equal three.

An architecture of N means the facility is designed only to keep a data center running at full capacity. Simply put, N is the same as zero redundancy. If the data center facility is at full load and there is a hardware failure, scheduled maintenance, or an unexpected outage, mission-critical applications would suffer. With an N design, any interruption would leave your business unable to access your data until the issue is resolved.

N+1 or N+X Model

An N+1 redundancy model provides a minimal level of resiliency by adding a single component (a UPS, HVAC system, or generator) to the N architecture so the facility can ride through a failure and maintain a full workload. When one system goes offline, the extra component takes over its load. Going back to the previous example, if N equals three UPS units, N+1 provides four. Likewise, an N+2 redundancy design provides two extra components: in our example, N+2 provides five UPS units instead of four. In general, N+X provides X extra components to reduce risk in the event of multiple simultaneous failures.

2N Model

2N redundancy creates a mirror image of the original UPS, cooling system or generators to provide full fault tolerance. It means if three UPS units are necessary to support full capacity, this redundancy model would include an additional set of three UPS units, for a total of six systems. This design also utilizes two independent distribution systems.

With a 2N model, data center operators can take down an entire set of components for maintenance without affecting normal operations. Moreover, in the event of unscheduled multiple component failures, the additional set takes over to maintain full capacity. The resiliency of this model greatly cuts the risks of downtime.

2N+1 Model

If 2N means full fault tolerance, 2N+1 delivers the fully fault-tolerant 2N model plus an extra component for additional protection. This model can withstand multiple component failures; even in a worst-case scenario where the entire primary system is offline, it still sustains N+1 redundancy. Because of its high reliability, this redundancy model is generally used by businesses that cannot tolerate even minor service disruptions.

Figure: N+1 and 2N+1 redundancy

3N/2 Model

The three-to-make-two or 3N/2 redundancy model bases the additional capacity on the load of the system. In a 3N/2 scenario, three power delivery systems power two servers, which means each power delivery system runs at about 67% of its capacity. Likewise, in 4N/3, four power delivery systems power three workloads (three servers). 3N/2 could be extended to 4N/3, but only in theory: such an elaborate model has so many components that managing and balancing loads to maintain redundancy would be very difficult.

Figure: 3N/2 redundancy
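
To make the capacity arithmetic behind these models concrete, here is a minimal Python sketch (our illustration, not from any vendor tool) that computes how many units each model installs for a given N; the function name and the ceiling rule used for 3N/2 are our own assumptions.

```python
# Minimal sketch: total installed units implied by each redundancy model.
def units_required(n: int, model: str) -> int:
    """Total units for a model, where n units are needed at full load."""
    if model == "N":
        return n                 # zero redundancy
    if model == "N+1":
        return n + 1             # one spare component
    if model == "2N":
        return 2 * n             # a full mirrored set
    if model == "2N+1":
        return 2 * n + 1         # mirrored set plus one spare
    if model == "3N/2":
        return -(-3 * n // 2)    # ceiling of 3n/2: three systems carry two systems' load
    raise ValueError(f"unknown model: {model}")

# With N = 3 UPS units (the article's example):
for m in ("N", "N+1", "2N", "2N+1", "3N/2"):
    print(f"{m}: {units_required(3, m)} units")
# N: 3, N+1: 4, 2N: 6, 2N+1: 7, 3N/2: 5 (each unit loaded to roughly 67%)
```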

What’s the Right One for You?

Choosing a redundancy model that meets your business needs can be challenging; finding the right balance between reliability and cost is key. For businesses that require as little downtime as possible, higher levels of redundancy are justifiable in terms of profit and overall net growth. For those that do not, lower levels of redundancy are acceptable: they are cheaper and more energy-efficient than the more sophisticated redundancy designs.

In short, there's no right or wrong redundancy model; it depends on a range of factors like your business goals, budget, and IT environment. Consult your data center provider or discuss with your IT team to figure out the best option for you.

Article Source: Understanding Data Center Redundancy

Related Articles:

What Are Data Center Tiers?

Data Center UPS: Deployments & Buying Guide

5 Factors to Consider for Data Center Environmental Monitoring

What Are Data Center Environmental Standards

Data center environmental monitoring is vital for device operations. Data center facilities are commonly classified into four tiers, and the equipment housed at each tier also shapes the design of data center environmental standards.

  • Tier I defines data center standards for facilities with minimal redundancy.
  • Tier II provides redundant critical power and cooling components.
  • Tier III adds redundant transmission paths for power and cooling to redundant critical components.
  • Tier IV infrastructure is built on Tier III and adds the concept of fault tolerance to infrastructure topology.

Enterprises must comply with fairly stringent environmental standards to ensure these facilities remain functional.

Evolution of Data Center Environmental Standards

As early as the 1970s and 1980s, data center environmental monitoring revolved around power facilities: for example, whether the environment housing the power supply was properly isolated, and whether the main power supply affected the operation of the overall equipment. Cooling, however, was rarely monitored. Some enterprises explored cooling technologies for data centers, such as liquid cooling, but typically loud fans were used to control airflow. In some countries the cost of electricity was high, so the emphasis was on supplying enough electricity for a given system configuration.

In the 1990s, rack power density became a key consideration in enterprise data center environmental standards. In the past, a simple power factor calculation could yield the required cooling value for a data center, but increasing rack densities made such estimates inaccurate. At this point, enterprises had to re-plan the airflow patterns of data center racks and equipment, which required IT managers to know more statistics when designing a data center, such as pressure drop, air velocity, and flow resistance.

By the early 21st century, power densities were still increasing, and thermal modeling was seen as a potential answer to optimizing the cooling of data center environments. Because the necessary data was often lacking, temperature data was typically collected after data center construction, and IT managers then made adjustments based on that information. Enterprises should choose the correct thermal model of equipment when building a data center to enhance environmental monitoring. Below are several environmental control methods to consider when building a data center.

5 Factors in Data Center Environmental Controls

To ensure the reliable operation of IT equipment within a rack, the primary concerns for monitoring and controlling data center environmental conditions are temperature, humidity, static electricity, fire, and physical security. These factors affect not only the ecological environment but also data center security, energy efficiency, and the enterprise's public image.

Temperature Control

Thermal control is always a challenging issue for data centers, as servers emit heat while running, and overheating can cripple data center operations. Temperature control verifies that equipment is operating within the recommended temperature range. Temperature sensors are an effective tool here: placing them in strategic locations and reading the overall temperature allows IT managers to act promptly.

Humidity Control

Humidity control is closely related to temperature levels. High humidity can corrode hardware, while low humidity can cause electrostatic arcing problems. For this reason, cooling and ventilation systems need to detect and control the relative humidity of the room air. ASHRAE recommends operation within a dew point range of 41.9 to 59 degrees Fahrenheit with a maximum relative humidity of 60%. Data center designers need to invest in systems that can detect humidity and water near equipment, to better monitor cooling fans and measure airflow during routine management. In larger facilities, it is also possible to use a set of computer room air conditioner (CRAC) units to create consistent airflow throughout the room. These CRAC systems typically work by drawing in warm air, cooling it, and expelling it through vents and air intakes leading to the servers.
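
As a simple illustration of how such monitoring might flag out-of-range conditions, here is a minimal Python sketch using the ASHRAE figures quoted above; the sensor readings and alerting logic are hypothetical.

```python
# Minimal sketch: flag sensor readings outside the ASHRAE envelope quoted above.
DEW_POINT_MIN_F = 41.9        # recommended minimum dew point, degrees Fahrenheit
DEW_POINT_MAX_F = 59.0        # recommended maximum dew point, degrees Fahrenheit
MAX_RELATIVE_HUMIDITY = 60.0  # recommended maximum relative humidity, percent

def humidity_ok(dew_point_f: float, rh_percent: float) -> bool:
    """Return True if a reading falls within the recommended envelope."""
    return (DEW_POINT_MIN_F <= dew_point_f <= DEW_POINT_MAX_F
            and rh_percent <= MAX_RELATIVE_HUMIDITY)

# (dew point in °F, relative humidity in %) from hypothetical sensors
for reading in [(50.0, 45.0), (62.0, 40.0), (48.0, 70.0)]:
    print(reading, "OK" if humidity_ok(*reading) else "ALERT")
```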

Static Electricity Monitoring

Static electricity is another threat in the data center environment, and it is an invisible nuisance. Some newer IT components can be damaged or completely fried by a discharge of less than 25 volts. Left unaddressed, the problem can result in frequent disconnections, system crashes, and even data corruption. Unexpected bursts of energy in the form of electrostatic discharge may be the greatest threat to the performance of the average data center. To prevent such incidents, businesses should install strategically located monitors that detect the buildup of static electricity.

Fire Suppression

A comprehensive fire suppression system is a must-have in data center environmental standards. To protect an entire data center from disaster, designers need to take security measures spanning fire detection and suppression systems as well as physical and virtual systems. Fire suppression systems require regular testing and active monitoring to ensure that they will do their job when it matters.

Security Systems

Physical security is also a very important part of data center environmental standards. IT departments must institute controls that keep intruders away from buildings as well as server rooms and the racks within them. Setting up a complete range of physical security, from IP surveillance systems to advanced sensors, is a sound approach: if unauthorized personnel are detected entering a building or server rack, the system alerts data center managers.

Summary

The purpose of data center environmental monitoring is to provide a better operating environment for facilities and avoid unplanned incidents that affect the business. Applying the environmental controls above helps enterprises maintain data center security and simplifies data center management, while also limiting the data center's impact on the environment and improving energy efficiency.

Article Source: 5 Factors to Consider for Data Center Environmental Monitoring

Related Articles:

Things You Should Know About Data Center Power

What Is Data Center Security?

What Is Edge Computing?

In today’s data-driven world, where businesses rely on real-time information to make critical decisions, edge computing has become a technology of choice. By moving a portion of compute and storage resources out of a central data center and closer to the data source, it greatly reduces latency issues, bandwidth limitations, and network disruptions.

With edge computing, data produced on a factory floor or in a retail store is processed and analyzed at the network’s edge, on premises. Since the data doesn’t travel across networks, speed is one obvious advantage. This translates to instant analysis of data, faster response by site personnel, and real-time decision-making.

How Edge Computing Works

Edge computing brings computing power closer to the data source, where sensors and other data-capturing instruments are located. The entire edge computing process takes place inside intelligent devices that speed up the processing of the collected data before the devices connect to the IoT.

The goal of edge computing is to boost efficiency. Instead of sending all the data collected by sensors to the enterprise applications for processing, edge devices do the computing and only send important data for further analysis or storage. This is possible thanks to edge AI, i.e., artificial intelligence at the edge.

After the edge devices process the data with the help of edge AI, they group the collected data or the results obtained into different categories. The three basic categories are:

  • Data that doesn’t need further action and shouldn’t be stored or transmitted to enterprise applications.
  • Data that should be retained for further analysis or record keeping.
  • Data that requires an immediate response.

The work of edge computing is to discriminate between these data sets and identify the level of response and the action required, then act on it accordingly.

Figure: edge computing
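
As a rough illustration of this triage, here is a minimal Python sketch; the categories mirror the list above, while the numeric thresholds are hypothetical stand-ins for whatever an edge AI model would actually decide.

```python
# Minimal sketch of the three-way triage described above; the threshold
# logic is hypothetical and stands in for a real edge-AI model.
from enum import Enum, auto

class Category(Enum):
    DISCARD = auto()  # no further action, not stored or transmitted
    RETAIN = auto()   # kept for further analysis or record keeping
    RESPOND = auto()  # requires an immediate response

def triage(reading: float, normal: float = 70.0, critical: float = 90.0) -> Category:
    """Classify one sensor reading into the three categories."""
    if reading >= critical:
        return Category.RESPOND
    if reading > normal:
        return Category.RETAIN
    return Category.DISCARD

for value in (65.0, 80.0, 95.0):
    print(value, "->", triage(value).name)
```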

Depending on the compute power of the edge device and the complexity of the data collected, the device may work on outlier data and provide a real-time response, or send it to the enterprise application for further analysis in real time with immediate retrieval of the results. Since only the important and urgent data sets are sent over the network, bandwidth requirements are reduced. This results in substantial cost savings, especially on wireless cellular networks.

Why Edge Computing?

There are several reasons why edge computing is winning the popularity battle in the enterprise computing world. Digital transformation initiatives, from robotics & advanced automation to AI and data analytics, all have one thing in common – they are largely data-dependent. Most industries that leverage these technologies are also time-sensitive, meaning the data they produce becomes irrelevant in a matter of minutes, if not seconds.

The large amounts of data currently produced by IoT devices strain a shared computing model through system congestion and network disruption. The result is huge financial losses, injuries, and costly damage in time- and disruption-sensitive applications. The attractiveness of edge computing often comes down to the three network challenges it seeks to solve. These are:

Latency – lag in the communication between devices and the network delays decision-making in time-sensitive applications. Edge computing solves this problem with a more distributed network, which ensures there’s no disconnect in real-time information transfer and processing. This gives a more reliable and consistent network.

Bandwidth – every network has limited bandwidth, especially wireless communications. Edge computing eases bandwidth limitations by processing immense volumes of data near the network’s edge and then sending only the most relevant information through the network. This minimizes the volume of data that requires a cellular connection.

Data Compliance and Governance – organizations that handle sensitive data are subject to the data regulations of various countries. By processing this data near its source, these companies can keep sensitive customer and employee data within their borders, ensuring compliance.

Edge Computing Use Cases

Over the years, edge data centers have found several use cases across industries, thanks to rapid tech adoption and the benefits of processing data at the network edge. Ideally, any application that requires moving large amounts of data to a centralized data center before retrieving the result and insights could benefit a lot from edge computing. Below are the different ways several industries use edge computing in their day-to-day operations:

Transportation – autonomous vehicles produce around 5 to 20 terabytes of data daily from information about speed, location, traffic conditions, road conditions, etc. This data must be organized, processed, and analyzed in real-time, and insights fed into the system while the vehicle is on the road. This time-sensitive application requires accurate, reliable, and consistent onboard computing.

Manufacturing – several manufacturers now deploy edge computing to monitor manufacturing processes and enable real-time analytics. By coupling this with machine learning and AI, edge computing can help streamline manufacturing processes with real-time insights, predictive analytics, and more.

Farming – indoor farming relies on different sensors that collect a wide range of data that must be processed and analyzed to gain insights into the crops’ health, weather conditions, nutrient density, etc. Edge computing makes this data processing and insights generation faster, hence faster response and decision making.

Other areas that have adopted edge computing include healthcare, where facilities help patients avoid health issues in real time, and retail, where it optimizes vendor ordering and sales prediction.

Edge Computing Challenges

Edge computing isn’t without its challenges, and some of the common ones revolve around security and data lifecycles. Applications that rely on IoT devices are vulnerable to data breaches, which could compromise security at the edge. As for data lifecycles, the challenge lies in the large amount of data stored at the network’s edge: a ton of useless data can take up critical space, so businesses should choose carefully which data to keep and which to discard.

Edge computing also relies on some level of connectivity, and the typical network limitations are another cause for concern. It’s, therefore, necessary to plan for connectivity problems and design an edge computing deployment that can accommodate common networking issues.

Implementing Edge Computing

Regardless of the industry you are in, edge computing comes with several benefits, but only if it’s designed well and deployed to solve the challenges common with centralized data centers. To get the most from your investment, you want to work with a reputed edge computing company or an expert IT consultant to guide you on the best way forward.

Article Source: What Is Edge Computing?

Related Articles:

Edge Computing vs. Multi-Access Edge Computing

Micro Data Center and Edge Computing

Why Green Data Center Matters

Background

Green data centers have emerged in enterprise construction due to the continuous growth of data storage requirements and steadily rising environmental awareness. Newly retained data must be protected, cooled, and transferred efficiently, so the huge energy demands of data centers present challenges in terms of cost and sustainability, and enterprises are increasingly concerned about the energy their data centers consume. Sustainable and renewable energy resources have thus become the development trend for green data centers.

Green Data Center Is a Trend

A green data center is a facility similar to a regular data center that hosts servers to store, manage, and disseminate data. It is designed to minimize environmental impact by providing maximum energy efficiency. Green data centers have the same characteristics as typical data centers, but the internal system settings and technologies can effectively reduce energy consumption and carbon footprints for enterprises.

The internal construction of a green data center requires the support of a series of services, such as cloud services, cable TV services, Internet services, colocation services, and data protection security services. Of course, many enterprises or carriers have equipped their data centers with cloud services. Some enterprises may also need to rely on other carriers to provide Internet and related services.

According to market research, the global green data center market was worth around $59.32 billion in 2021 and is expected to grow at a CAGR of 23.5% through 2026. This also shows that the transition to renewable energy sources is accelerating along with the growth of green data centers.

As growing demand for data storage drives the modernization of data centers, it also places higher demands on power and cooling systems. On the one hand, data centers that convert non-renewable energy into electricity face rising electricity costs; on the other hand, some enterprises consume large amounts of water for cooling facilities and server cleaning. Both create ample opportunities for the green data center market. For example, as Facebook and Amazon continue to expand their businesses, global demand for data storage grows with them. These enterprises need vast amounts of data to analyze potential customers, and processing that data requires a great deal of energy. Building green data centers has therefore become an urgent need for enterprises, and it brings them many other benefits as well.

Green Data Center Benefits

The green data center concept has grown rapidly in the process of enterprise data center development. Many businesses prefer alternative energy solutions for their data centers, which can bring many benefits to the business. The benefits of green data centers are as follows.

Energy Saving

Green data centers are designed not only to conserve energy but also to reduce the need for expensive infrastructure to handle cooling and power. Sustainable or renewable energy is an abundant and reliable source of power that can significantly reduce power usage effectiveness (PUE). A lower PUE enables enterprises to use electricity more efficiently. Green data centers can also use colocation services to decrease server usage, lower water consumption, and reduce the cost of corporate cooling systems.
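
For reference, PUE is simply total facility power divided by IT equipment power, as this minimal sketch with made-up numbers shows:

```python
# Minimal sketch: PUE = total facility power / IT equipment power.
# A PUE of 1.0 would mean every watt reaches IT gear; lower is better.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

print(pue(1500.0, 1000.0))  # 1.5: a third of the power goes to cooling, losses, etc.
```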

Cost Reduction

Green data centers use renewable energy to reduce power consumption and business costs through the latest technologies. Shutting down servers that are being upgraded or managed can also help reduce energy consumption at the facility and control operating costs.

Environmental Sustainability

Green data centers can reduce the environmental impact of computing hardware, thereby creating data center sustainability. Ongoing technological development brings new equipment and techniques into modern data centers, and newer server hardware and virtualization technologies reduce energy consumption, which is environmentally sustainable and brings economic benefits to data center operators.

Enterprise Social Image Enhancement

Today, users are increasingly interested in solving environmental problems. Green data center services help businesses address these issues quickly without compromising performance, and many customers already see responsible business conduct as a value proposition. By building green data centers to meet the compliance and regulatory requirements of the corresponding regions, enterprises improve their public image.

Reasonable Use of Resources

In an environmentally friendly way, green data centers allow enterprises to make better use of resources such as electricity, physical space, and heat by integrating the data center's internal facilities. This promotes efficient operation of the data center while achieving rational utilization of resources.

5 Ways to Create a Green Data Center

Having covered the benefits of a green data center, how do you actually build one? Here is a series of green data center solutions.

  • Virtualization extension: Enterprises can build a virtualized computer system with the help of virtualization technology and run multiple applications and operating systems on fewer servers, supporting the construction of green data centers.
  • Renewable energy utilization: Enterprises can opt for solar panels, wind turbines, or hydroelectric plants that generate energy without harming the environment.
  • Eco mode: Running an alternating current (AC) UPS in eco mode is one way to significantly improve data center efficiency and PUE. Alternatively, enterprises can reuse equipment, which not only saves money but also keeps unnecessary emissions out of the atmosphere.
  • Optimized cooling: Data center infrastructure managers can introduce simple, implementable cooling solutions, such as deploying hot aisle/cold aisle configurations. Data centers can further improve cooling output by investing in air handlers and coolers, and by installing economizers that draw outside air from the natural environment, to build green data center cooling systems.
  • DCIM and BMS systems: DCIM and BMS software can help data center managers identify and document ways to use energy more efficiently, helping data centers become more efficient and achieve sustainability goals.

Conclusion

Data center sustainability means reducing energy and water consumption and carbon emissions to offset increased computing and mobile device usage while keeping business running smoothly. The development of green data centers has become an imperative trend, and it serves the green goals of global environmental protection. As beneficiaries, enterprises can not only save operating costs but also effectively reduce energy consumption. This is an important reason to build green data centers.

Article Source: Why Green Data Center Matters

Related Articles:

Data Center Infrastructure Basics and Management Solutions

What Is a Data Center?

PCI vs PCI Express: What’s the Difference?

PCI and PCI Express are two generations of internal bus standards for connecting peripheral devices to equipment like computers and network servers. But do you know how they are related? And can you tell the differences between PCI and PCI Express? To answer these questions, this post explores both PCI and PCI Express.

What Do PCI and PCI Express Stand for?

What Is PCI?

PCI, short for peripheral component interconnect, is a connection interface standard developed by Intel in 1990. Originally it was used only in servers; later, from 1995 to 2005, PCI was widely implemented in computers and other network equipment such as network switches. Most commonly, a PCI-based expansion card inserts into a PCI slot on the motherboard of a host or server. In the expansion card market, popular PCI expansion cards include network interface cards (NICs), graphics cards, and sound cards.

What Is PCI Express?

Figure 1: PCI Express Network Card

PCI Express, abbreviated PCIe, stands for peripheral component interconnect express. The successor to PCI, PCI Express is a connection standard introduced by Intel in 2001 that provides more bandwidth and better compatibility with existing operating systems than PCI. Like PCI, PCIe can be used for expansion cards, such as a PCIe Ethernet card inserted into a PCI Express slot.

Comparison of PCI Vs PCI Express

As the replacement for PCI, PCI Express differs from it in several aspects, such as working topology and bandwidth. This part offers a brief comparison of PCI vs PCI Express.

PCI vs PCI Express in Working Topology: PCI is a parallel connection; devices connected to the PCI bus share it, with each appearing as a bus master connected directly to its own bus. A PCIe card, by contrast, uses a high-speed serial connection: instead of one bus handling data from multiple sources, PCIe has a switch that controls several point-to-point serial connections.

Figure 2: PCI Vs PCI Express

PCI vs PCI Express in Bandwidth: Generally, PCI comes in fixed 32-bit and 64-bit widths, running at 33 MHz or 66 MHz. At 32 bits and 33 MHz, the potential bandwidth is 133 MB/s; at 32 bits and 66 MHz, 266 MB/s; and at 64 bits and 66 MHz, 532 MB/s. For a PCIe card, bandwidth varies from 250 MB/s to several GB/s per lane, depending on the card's size and version. For more detail, you can refer to the post: PCIe Card Tutorial: What Is PCIe Card and How to Choose It?
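
The arithmetic behind those PCI figures is straightforward, as this minimal sketch shows (the nominal 33/66 MHz clocks are really 33.33/66.66 MHz, which is where the quoted numbers come from):

```python
# Minimal sketch of the PCI bandwidth arithmetic above:
# bus width in bits x clock in MHz / 8 bits per byte = MB/s.
def pci_bandwidth_mb_s(bus_width_bits: int, clock_mhz: float) -> float:
    return bus_width_bits * clock_mhz / 8

print(pci_bandwidth_mb_s(32, 33.33))  # 133.32, quoted as 133 MB/s
print(pci_bandwidth_mb_s(32, 66.66))  # 266.64, quoted as 266 MB/s
print(pci_bandwidth_mb_s(64, 66.66))  # 533.28, quoted as 532 MB/s above
```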

PCI vs PCI Express in Other Respects: PCI Express can connect a maximum of 32 end-point devices and supports hot plugging. PCI, by contrast, supports a maximum of 5 devices and offers no hot-plugging capability.

FAQs About PCI Vs PCI Express

1. Is the speed for PCI slower than PCI Express?

Yes, PCIe is faster than PCI. Take PCIe x1 as an example: it is at least 118% faster than PCI. The gap is even more obvious when you compare a PCIe-based video card with a PCI video card: a PCIe x16 video card is almost 29 times faster than a PCI video card.

2. Can PCI cards work in PCIe slots?

The answer is no. PCIe and PCI are not compatible with each other due to their different configurations. In most cases, there are both PCI and PCIe slots on the motherboard, so please fit the card into its matching slot and do not misuse the two types.

3. What is a PCIe slot?

A PCIe slot designation refers to the physical size of the PCI Express slot. By and large, there are four slot types: x16, x8, x4, and x1. The higher the number, the longer the slot. For example, PCIe x1 is 25 mm in length, while PCIe x16 is 89 mm.

Summary

In this post, we compared PCI and PCI Express, from their origins and working topology to their bandwidth and more. The final part lists several frequently asked questions for your information. We hope this post gives you some help in telling PCI and PCI Express apart.

What Is SWDM4 and 100G SWDM4 Transceiver?

With the promotion of OM5 multimode fiber (MMF) and the large-scale deployment of 40G and 100G data center transmission networks, SWDM technology has gradually come into view and begun to be applied. So what is SWDM4? What is a 100G SWDM4 transceiver? What are their advantages? Read on for the answers.

What Is SWDM4?

To begin with, you should know what SWDM is before diving into SWDM4. So, what is SWDM?

SWDM, whose full name is short wavelength division multiplexing, is a new multi-vendor technology that promises the lowest-total-cost solution for enterprise data centers upgrading to 40G and 100G Ethernet over their existing 10G duplex OM3/OM4 MMF infrastructure. It can also cost-effectively increase bandwidth density for new data center builds and extend reach when used with OM5 wideband multimode fiber (WBMMF). OM5 fiber also future-proofs the infrastructure for possible future 200G, 400G, and 800G interfaces.

To upgrade data centers to 40G/100G Ethernet without changing the existing duplex MMF infrastructure used for 10G Ethernet, pluggable optical transceivers with SWDM technology matter a lot. This approach uses multiple vertical-cavity surface-emitting lasers (VCSELs) operating at different wavelengths in the 850nm window (where MMF is optimized). The four-wavelength implementation of SWDM is called SWDM4; its four wavelengths (850, 880, 910 and 940 nm) are multiplexed and demultiplexed inside the transceiver module onto a pair of MMFs (one fiber in each direction, i.e., a standard duplex interface). Each of the four wavelengths operates at either 10G or 25G, enabling transmission of 40G (4 x 10G) or 100G (4 x 25G) Ethernet over existing duplex MMF using standard LC connectors.

Figure: Four SWDM4 wavelengths defined by the SWDM MSA
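
The lane math is easy to sanity-check with a minimal sketch like the one below (the constant and function are ours, for illustration only):

```python
# Minimal sketch of the SWDM4 lane math described above: four wavelengths
# share one duplex fiber pair, each lane running at 10G or 25G.
SWDM4_WAVELENGTHS_NM = (850, 880, 910, 940)

def aggregate_gbps(per_lane_gbps: int) -> int:
    return per_lane_gbps * len(SWDM4_WAVELENGTHS_NM)

print(aggregate_gbps(10))  # 40  -> 40G Ethernet over duplex MMF
print(aggregate_gbps(25))  # 100 -> 100G Ethernet over duplex MMF
```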

What Is a 100G SWDM4 Transceiver?

SWDM4 transceivers deliver 40G and 100G connections in the same way a standard SFP+ transceiver connects, using duplex LC OM3 or OM4 cabling. Here, we will focus on the 100G connections. You may already know something about 100G transceivers, so what about the 100G SWDM4 transceiver?

As the name suggests, a 100G SWDM4 transceiver is a 100G transceiver featuring SWDM4 technology. It provides 100Gbps of bandwidth over standard duplex MMF, eliminating the need for expensive parallel MMF infrastructure, and it offers a seamless migration path from duplex 10G to 100G.

According to the 100G SWDM4 MSA Technical Specifications, a 100G SWDM4 QSFP28 transceiver can be used for links up to 75m over OM3 fiber or up to 100m over OM4 fiber. The Tx port transmits 100G of data over 4 x 25Gbps wavelengths, and the Rx port receives data the same way. The wavelengths sit in the “short wavelength” range (from 850nm to 940nm). You can also use the more advanced OM5 fiber, still over just two fibers, for a longer reach (up to 150m) at a higher price.

Figure: Block diagram of a 100G SWDM4 QSFP28 transceiver

Advantages of a 100G SWDM4 Transceiver

Here are several benefits from using the SWDM4 in 100G environments with MMF:

  • Cost-Effective: It uses two fibers (duplex) instead of eight (SR4), enabling significant fiber infrastructure capex savings.
  • OM5 Supported: It supports links up to 150m over OM5 MMF with only two fibers.
  • Easy Migration to 100G: It enables seamless migrations from both 10G and 40G to 100G without major changes to the fiber infrastructure. It works on legacy OM3 or OM4 duplex MMF as well. The widely deployed 10G-SR, 40G-BiDi and 40G-Universal optics all operate over a single pair of MMF with regular LC termination. So does the 100G-SWDM4 transceiver. Therefore, users don’t need to change the existing cabling or re-terminate.
  • Familiar Tap Modules: It can be tapped using existing 1 x 2 Tap modules just like 10G-SR and 40G-Universal optics with no change or replacements, avoiding additional cost and complexity.

Conclusion

From all the above, you should now have a general understanding of the three concepts: SWDM, SWDM4, and the 100G SWDM4 transceiver. Given the advantages discussed above, SWDM technology and 100G SWDM4 transceivers may become dominant trends in the near future, so it is worth keeping an eye on them for future network construction. By the way, FS.COM offers a variety of 100G optical modules to choose from, such as PSM4, CWDM4, etc.

Related Articles:

Wideband Multimode Fiber: What to Expect From It?

25G Ethernet – How It Develops and What’s the Future of It?

Have you ever heard of 25G Ethernet? It is a hot topic these days. So what is it, how did it develop, and what is its future? Let’s find the answers in the following look at the development of 25G Ethernet.

What Is 25G Ethernet? Why Does It Appear?

25G Ethernet, or 25 Gigabit Ethernet, is a standard for Ethernet network connectivity in a data center environment, developed by the IEEE P802.3by 25 Gb/s Ethernet Task Force. The IEEE 802.3by standard uses technology defined for 100 Gigabit Ethernet, which is implemented as four 25 Gbps lanes (IEEE 802.3bj).

Figure: 25G Ethernet to 100G

In addition to 10, 40, and 100GbE networking, 25G Ethernet technology continues to innovate and lay a path to higher networking speeds. You may ask why it appeared when we already had 40G. As you may know, 40GbE technology has evolved over the years and gained some momentum as an option for enterprises, service providers, and cloud providers. However, since the underlying technology for 40G Ethernet is simply four lanes at 10G speed, it does not offer the power-consumption advantages when upgrading to 100G that 25G can offer.

25G Ethernet can provide a simpler path to Ethernet speeds of 50Gbps, 100Gbps and beyond. With 25G, network operators are no longer forced to use a 40G QSFP port to go from one individual device to another to achieve 100G throughput.

Development of 25G Ethernet

Year 2014 – 25G Was First Introduced

25G Ethernet dates back to 2014, the year it was first put forward. At that time, a wide range of vendors discussed its cost and efficiency compared with 10G, 40G, and 100G. Well-known hyperscale data center and cloud computing providers such as Google, Microsoft, Broadcom, Arista, and Mellanox formed a special research group, the 25G Ethernet Consortium, to explore the standardization of 25G Ethernet and promote its development.

Year 2015 – The First Batch of 25G Products Appeared

In the second year of 25G Ethernet exploration, the Consortium produced a deeper and more comprehensive analysis, examining 25G Ethernet from various angles: demand trends in data centers, advantages and applications, common questions, and more. As exploration deepened, the standardization of 25G Ethernet gradually took shape, and suppliers held great expectations for its development.

As initiators of the 25G Ethernet Consortium, Broadcom, Mellanox, and Arista stepped ahead and planned to launch products for 25G development. Broadcom was ramping up production of its “Tomahawk” switch ASICs, and Mellanox had announced its Spectrum ASICs as well as adapter cards supporting 25 Gb/s, 50 Gb/s, and 100 Gb/s speeds on servers. Arista, meanwhile, joined the list of vendors supporting the new 25G Ethernet standards with three new switches, the 7060X, 7260X, and 7320X, which support both 25 and 50 Gigabit Ethernet.

Year 2016-2017 – Fast Development of 25G

These two years held significant meaning for 25G Ethernet development. During this period, the IEEE approved the 802.3by specification for 25G Ethernet, and major suppliers rushed to launch their own 25G products to follow the market trend. 25G Ethernet found more practical applications in the data center.

Figure: 802.3by specification for 25G Ethernet

In 2016, Marvell introduced the industry’s most optimized 25GbE end-to-end data center solution with its newest Prestera switches and Alaska Ethernet transceivers. Finisar introduced 25G Ethernet optics for high-speed data centers: its SFP28 eSR transceiver enabling 300-meter links over existing OM3 MMF, and 25G SFPwire, an active optical cable (AOC) with embedded technology providing real-time troubleshooting and link performance monitoring. In addition, major server vendors including Dell, HPE, and Lenovo offered 25G network adapter solutions. And as a member of the 25G Ethernet Consortium, Mellanox offered the SN2100, a half-rack-width switch with 16 100G ports that can serve as 64 25G ports with breakout cables.

In 2017, 25G was recognized as the industry standard for next-generation server access rates. Related technical specifications, such as those for 25G ToR switches and AOC cables, urgently needed to be finalized, and global organizations competed actively to take the initiative. At that time, China’s ODCC (Open Data Center Committee) introduced the first 25G ToR switch specification and released its details, becoming an important force in the rapid rise of 25G Ethernet.

As companies offered more and more types of 25G SFP28 transceivers, DACs, and AOCs, the call for 25G Ethernet construction grew louder and louder.

Year 2018 Till Now – Competition Against Other Network Products

2018 was a year of competition between 25G products and other products. During the year, sales of 10G products declined slightly while 25G products received more and more recognition. In 2018, Supermicro opened a path to 100G networking with new 25G Ethernet server and storage solutions. It offers a wide range of 25G NIC solutions that let customers future-proof nearly any Supermicro system by equipping it with 25G Ethernet networking technology. Supermicro also offers a 25G switch (SBM-25G-100) with the X11 SuperBlade; this switch has twenty 25G downlink connections and four QSFP28 ports, each configurable as a 40G or 100G uplink.

In any case, the arrival and impact of 25G have given the industry confidence: data centers and suppliers can’t wait to plan for the era of 100G, 200G, or even 400G.

How Far Can 25G Ethernet Go?

From all the above, you should have a general understanding of how 25G has developed. At present, 25G is mainly used for switch-to-server applications, and it has indeed gained ground over 10G and 40G Ethernet in some respects. You can also see a clear trend in the 25G market in a recent five-year forecast by industry analysts at the Dell’Oro Group, shown below.

Figure: 25G five-year forecast (Dell’Oro Group)

In the long run, 25G will go further, since 25G switches offer a more convenient way to migrate to 100G or even 400G networks.

Related Articles:

25G Switch Comparison: How to Choose the Suitable One?

Taking an In-depth Look at 25G SFP28

How to Choose A Suitable Power Over Ethernet Switch?

As we all know, a Gigabit Ethernet switch is a popular choice for network users given its lower price and relatively good functionality. In recent years, however, an increasing number of network users have been buying power over Ethernet (PoE) switches, since they offer many advantages and suit different applications. For example, a PoE switch carries power and data over one Ethernet cable at the same time, which dramatically simplifies cabling and cuts network cost. So here comes the question: how do you choose a suitable power over Ethernet switch? Are there any buying tips? Let’s find the answers together.

Figure: power over Ethernet switch applications

What Type of Power Over Ethernet Switch Should I Buy?

Normally, there are three types of power over Ethernet switches: unmanaged, managed, and smart PoE switches. Managed switches are the most popular in actual applications.

An unmanaged switch is the most basic form of a network switch. Normally, an unmanaged PoE switch only allows your devices to connect with one another. It is best suited for home and small office use. Such a switch is not recommended for a business that handles sensitive information, such as an accounting firm or a bank.

Contrary to an unmanaged PoE switch, a managed one offers full management capabilities and security features. It can be configured and properly managed to offer a more tailored experience. It can help you monitor the network and control overall traffic. Such switch is usually used in enterprise networks and data centers.

A smart PoE switch (or hybrid PoE switch) has some of the functions of a managed one. It enables you to configure ports and set up virtual networks, but doesn’t allow network monitoring, troubleshooting, or remote access. It is usually used in business applications such as VoIP and smaller networks.

Other Main Factors on Buying A Power Over Ethernet Switch

In addition to choosing among the types above, there are many other things to consider when buying a power over Ethernet switch, such as the following aspects:

  • Port Numbers: Network switches come in different port counts, such as 8-port and 24-port PoE switches. The larger the network, the more ports you’ll need; it is better to choose a switch with more interfaces than you currently require.
  • Maximum Power Supply: The maximum power supply of your PoE switch matters as well. If it is less than the overall power needed by your powered devices (IP cameras, for example), the PoE switch won’t power all of them adequately, and the shortfall may cause poor device performance such as video loss.
  • Maximum Power Consumption: Estimate the power consumption of all your powered devices (PDs) in advance to see whether your power over Ethernet switch can support them. There are two main PoE standards, IEEE 802.3af and IEEE 802.3at. IEEE 802.3af provides up to 12.95W of DC power at each PD (after accounting for power loss in the network cable), while IEEE 802.3at delivers up to 25.5W. PDs drawing between 12.95W and 25.5W therefore require IEEE 802.3at.
  • Forwarding Rate: Switches process data at different rates, and the forwarding rate is very important when selecting a switch. On a Gigabit PoE switch, each Gigabit Ethernet port runs at 1Gbps, so a 48-port PoE switch operating at full wire speed generates 48Gbps of traffic. If the switch only supports a forwarding rate of 32Gbps, it cannot run all ports at full wire speed simultaneously. (Both the power-budget and wire-speed checks are sketched in the example after this list.)
  • Technical Support: Consider whether the power over Ethernet switch provider offers a local support team to help if you run into problems configuring the switch or other issues.
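
Here is a minimal Python sketch of those two checks; the device wattages and switch figures are hypothetical examples, not vendor data.

```python
# Minimal sketch of the power-budget and forwarding-rate checks above.
AF_MAX_PD_W = 12.95  # IEEE 802.3af: max power available to a powered device
AT_MAX_PD_W = 25.5   # IEEE 802.3at: max power available to a powered device

def poe_budget_ok(pd_draws_w, switch_budget_w):
    """True if the switch can power all attached devices at once."""
    return sum(pd_draws_w) <= switch_budget_w

def wire_speed_ok(ports, port_gbps, forwarding_gbps):
    """True if every port can run at full wire speed simultaneously."""
    return ports * port_gbps <= forwarding_gbps

cameras_w = [6.5, 6.5, 12.0, 20.0]            # four IP cameras, draw in watts
needs_at = [w for w in cameras_w if w > AF_MAX_PD_W]  # these PDs need 802.3at
print(needs_at)                               # [20.0]
print(poe_budget_ok(cameras_w, 60.0))         # True: 45 W needed vs a 60 W budget
print(wire_speed_ok(48, 1.0, 32.0))           # False: 48 Gbps needed vs 32 Gbps
```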

Conclusion

From all the above, you should have a general understanding of how to choose a suitable power over Ethernet switch. First decide which type of switch you need, then factor in requirements such as port count, maximum power supply, maximum power consumption, and forwarding rate to arrive at the most appropriate switch for you.

Related Articles:

Why You Need a Managed 8 Port PoE Switch

Power over Ethernet Switch Explained: Why Choose PoE switch over PoE Injector?

Why Should You Use A Managed Switch With PoE?

Nowadays, managed PoE switches are getting more and more popular among network users, and many people choose a managed switch with PoE function rather than an unmanaged one. Why is this happening? Are there particular reasons? Read this post to learn why you should use a managed switch with PoE, as well as the difference between an unmanaged PoE switch and a managed PoE switch.

What Is A Managed Switch?

You may know that network switches can be divided into two types by management level: managed switches and unmanaged switches. So what is a managed switch? And what’s the difference between an unmanaged and a managed switch?

A managed switch is a switch that allows access to one or more interfaces for configuring or managing features such as Spanning Tree Protocol (STP), port speed, and VLANs. It gives you more control over your LAN traffic and offers advanced features to manage that traffic. One example is the FS S5800-48F4S 10GbE switch, which supports MLAG, VxLAN, SNMP, and more.

Figure: FS S5800-48F4S 10GbE switch

On the contrary, an unmanaged switch simply allows Ethernet devices, such as a PC or network printer, to communicate with one another. It ships with a fixed configuration and does not allow any changes to that configuration.

Advantages of A Managed Switch

Normally, a managed switch is preferable to an unmanaged one, since it provides all the features of an unmanaged switch plus advantages such as administrative controls, network monitoring, and restricted communication for unauthorized devices.

What Is PoE? Why Should You Use A Managed Switch With PoE?

From the introduction above, you may be aware of the importance of a managed switch. Then, why should you use a managed switch with PoE? Do you know what a managed PoE switch is?

What Is PoE?

PoE means power over Ethernet. Its main advantage is the delivery of data and power at the same time over one Cat5e or Cat6 Ethernet cable, which ends the need for separate AC or DC power supplies and outlets. What’s more, a remote installation costs less than fiber, as no electrician is required.

Why Should You Use A Managed Switch With PoE?

PoE is not recommended for sending network data over long distances or in extreme temperatures unless industrial-rated equipment is used. It is often found on Gigabit Ethernet switches and is mainly used with IP cameras, VoIP phones, and wireless access points (WAPs). These use cases are the reason to pair PoE with a managed switch. Let’s take the FS 8-port Gigabit PoE+ managed switch as an example.

Figure: FS 8-port Gigabit PoE+ managed switch

The FS 8-port Gigabit PoE+ managed switch offers a cost-effective and efficient PoE solution for business. As the layout below shows, if you need to connect an NVR to build a better surveillance network, or to run IP cameras, such a managed PoE switch is an ideal choice.

Figure: application layout of a managed switch with PoE

Conclusion

With all the illustration above, you may have a general understanding of what a managed PoE switch is and why you should use it in certain circumstances. A managed switch with PoE not only includes all the functions that a managed switch has, but also enables you to transfer data and power at the same time over one Cat5e or Cat6 Ethernet cable.

Related Article:

Why You Need a Managed 8 Port PoE Switch

FS.COM PoE Switch Solution

How to Build Affordable 10G Network for Small and Midsize Business?

With the fast development of today’s networking field, many people are building 10G networks for small and midsize businesses to meet growing network needs. So why choose a 10G network? How do you build an affordable one? And what should you know before building such a network? Don’t worry; let’s find all the answers in the following text.

Necessity of a 10G Network

Actually, the necessity of a 10G network is quite simple to understand. As time goes on, more traffic and applications will run on your existing network, and they will keep growing. At that point, the commonly used Gigabit network will no longer satisfy the urgent need for higher networking speeds and larger network construction.

How to Build An Affordable 10G Network?

To build a 10G network, there are several indispensable components you need, such as a 10GbE switch (a 10G core switch and an access switch with 10G uplinks), 10G SFP+ modules, fiber cables, servers and storage devices, etc.

Figure: 10G network layout

To build an affordable 10G network for a small and midsize business (SMB), let’s take a fiber cabling solution as an example.

Fiber Cabling Solution for 10G Network

In this scenario, the server or storage device has a 10G SFP+ port, making it suitable for applications with a 10G fiber switch as the core switch. You can connect all the devices with the steps below:

Step 1: Connect Server Or Storage to A Core Switch

To connect the server (or storage device) to a core switch, insert a 10G transceiver module into the server or storage device, attach one end of an LC cable to the transceiver, and connect the other end of the LC cable to the core switch.

Here, the transceiver we use is 10G SFP+ module provided by FS.COM. It can reach a maximum cable distance of 300m over OM3 multimode fiber (MMF).

The LC cable we use is LC UPC to LC UPC duplex OM3 MMF, which has less attenuation when bent or twisted compared with traditional optical fiber cables and will make the installation and maintenance of the fiber optic cables more efficient.

What’s more, the core switch we use is the FS S5850-48S2Q4C, a 48-port 10Gb SFP+ L2/L3 carrier-grade switch with 6 hybrid 40G/100G uplink ports. It is a high-performance top-of-rack (ToR) or leaf switch built to meet next-generation metro, data center, and enterprise network requirements.

Step 2: Connect the Core Switch With An Access Switch

Next, you need to connect the core switch to an access switch. Just like step 1, insert a 10G transceiver module into the core switch, attach one end of an LC cable to it, and connect the other end of the cable to the access switch.

Here, we use an FS Gigabit Ethernet switch with 10G SFP+ uplinks as the access switch. This fanless switch suits the quiet operation SMB networks require. In addition, it has 24 10/100/1000BASE-T ports and 4 10Gb SFP+ ports for uplinks.

And the LC cable and 10G transceiver we use are the same as the products used in step 1.

Step 3: Connect Your Access Switch to Computers

After the previous two steps, you can use Cat5 or Cat5e cable (here we use Cat5e) to connect your access switch to computers or other devices. Just remember to connect them to the 10/100/1000BASE-T ports rather than the 10Gb SFP+ ports.

| Products | Price | Features |
| --- | --- | --- |
| 10G SFP+ transceiver module | From US$16.00 | Supports 8 Gbit/s Fibre Channel, 10 Gigabit Ethernet, and Optical Transport Network standard OTU2 |
| LC UPC to LC UPC duplex OM3 fiber cable | From US$1.40 to US$5.30 for 1m | OM3 10Gb 50/125 multimode fiber |
| FS S5850-48S2Q4C core switch | US$5,699.00 | 48 x 10Gb + 2 x 40Gb + 4 x 100Gb ports; non-blocking bandwidth up to 960Gbps |
| Gigabit access switch with 10G SFP+ uplinks | US$279.00 | 24 x 100/1000BASE-T + 4 x 10Gb SFP+ ports; switching capacity up to 128Gbps |
| Cat5e Ethernet patch cable | Start from US$0.82 for 6in | Shielded (STP) or unshielded (UTP) Cat5e Ethernet network patch cable (24/26AWG, 100MHz, RJ45 connector) |

Conclusion

From all the above, you should be clearer about how to build an affordable 10G network for a small and midsize business with a 10GbE switch, fiber cables, Ethernet cables, and so on. Done the right way, you can build not only an affordable 10G network but also a powerful one ready for future network expansion.

Related Articles:

How to Build a 10G Home Fiber Network?

How to Build 10GbE Network for Small and Mid-Sized Business?