400G ZR & ZR+ – New Generation of Solutions for Longer-reach Optical Communications


400G ZR and 400G ZR+ coherent pluggable optics are new solutions for high-density networks, delivering data rates from 100G to 400G with low power consumption in a small footprint. Let's see how the latest generation of 400G ZR and 400G ZR+ optics extends economic benefits to meet the requirements of network operators, maximizes fiber utilization, and reduces the cost of data transport.

400G ZR & ZR+: Definitions

What Is 400G ZR?

400G ZR coherent optical modules are compliant with the OIF-400ZR standard, ensuring industry-wide interoperability. They provide 400Gbps of optical bandwidth over a single optical wavelength using DWDM (dense wavelength division multiplexing) and higher-order modulation such as 16QAM. Implemented predominantly in the QSFP-DD form factor, 400G ZR serves the specific requirements of massively parallel 400GbE data center interconnect at distances of 80-120km. To learn more about 400G transceivers, see: How Many 400G Transceiver Types Are in the Market?

Overview of 400G ZR+

ZR+ is a range of coherent pluggable solutions with line capacities up to 400Gbps and reaches well beyond 80km, supporting various application requirements. The specific operational and performance requirements of each application determine which types of 400G ZR+ coherent plugs will be used in a network. Some applications take advantage of interoperable, multi-vendor ecosystems defined by standards-body or MSA specifications, while others rely on the maximum performance achievable within the constraints of a pluggable module package. Four categories of 400G ZR+ applications are explained below.

400G ZR & ZR+: Applications

400G ZR – Application Scenario

The arrival of 400G ZR modules has ushered in a new era of DWDM technology marked by open, standards-based, pluggable DWDM optics, enabling true IP-over-DWDM. 400G ZR is often applied to point-to-point DCI (up to 80km), making the task of interconnecting data centers as simple as connecting switches inside a data center (as shown below).

Figure 1: 400G ZR Applied in Single-span DCI

Four Primary Deployment Applications for 400G ZR+

Extended-reach P2P Packet

One definition of ZR+ is a straightforward extension of the 400G ZR transcoded mapping of Ethernet, with a higher-performance FEC to support longer reaches. In this case, 400G ZR+ modules are narrowly defined as supporting a single-carrier 400Gbps optical line rate and transporting 400GbE, 2× 200GbE or 4× 100GbE client signals over point-to-point reaches (up to around 500km). This solution is specifically dedicated to packet transport applications and destined for router platforms.

Multi-span Metro OTN

Another definition of ZR+ is the inclusion of support for OTN, such as client mapping and multiplexing into FlexO interfaces. This coherent pluggable solution is intended to support the additional requirements of OTN networks, carry both Ethernet and OTN clients, and address transport in multi-span ROADM networks. This category of 400G ZR+ is required where demarcation is important to operators, and is destined primarily for multi-span metro ROADM networks.

Figure 2: 400G ZR+ Applied in Multi-span Metro OTN

Multi-span Metro Packet

The third definition of ZR+ is an extended-reach Ethernet or packet transcoded solution that is further optimized for critical performance metrics such as latency. With a high-performance FEC and sophisticated coding algorithms, this 400G ZR+ coherent pluggable supports the longest reaches, beyond 1000km, for multi-span metro packet transport.

Figure 3: 400G ZR+ Applied in Multi-span Metro Packet

Multi-span Metro Regional OTN

The fourth definition of ZR+ supports both Ethernet and OTN clients. This coherent pluggable also leverages a high-performance FEC and probabilistic constellation shaping (PCS), along with tunable optical filters and amplifiers, for maximum reach. It supports a rich feature set of OTN network functions for deployment over both fixed and flex-grid line systems. This category of 400G ZR+ provides higher-performance solutions that address a much wider range of metro/regional packet networking requirements.

400G ZR & ZR+: What Makes Them Suitable for Longer-reach Transmission in Data Centers?

Coherent Technology Adopted by 400G ZR & ZR+

Coherent technology uses three degrees of freedom (the amplitude, phase, and polarization of light) to encode more data onto the transmitted wave. In this way, coherent optics can transport more data over a single fiber for greater distances using higher-order modulation techniques, which results in better spectral efficiency. 400G ZR and ZR+ are a leap forward in the application of coherent technology. With higher-order modulation and DWDM unlocking high bandwidth, 400G ZR and ZR+ modules can reduce the cost and complexity of high-level data center interconnects.
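To make the modulation arithmetic concrete, here is a rough sketch of what dual-polarization 16QAM implies for a 400G line: the bits per symbol and the approximate symbol rate required. The 15% FEC/framing overhead is an illustrative assumption, not a figure from the OIF-400ZR specification.

```python
# Sketch: why DP-16QAM makes 400G possible on a single wavelength.
# Assumption: ~15% FEC/framing overhead (illustrative only; the actual
# 400ZR frame overhead differs).
from math import log2

line_rate_gbps = 400      # client payload rate
qam_order = 16            # 16QAM carries log2(16) = 4 bits per symbol
polarizations = 2         # coherent optics modulate both polarizations

bits_per_symbol = log2(qam_order) * polarizations   # 8 bits/symbol
overhead = 0.15
symbol_rate_gbaud = line_rate_gbps * (1 + overhead) / bits_per_symbol

print(f"{bits_per_symbol:.0f} bits per symbol (DP-16QAM)")
print(f"~{symbol_rate_gbaud:.1f} Gbaud carries {line_rate_gbps}G")  # ~57.5 Gbaud
```

A plain 1-bit-per-symbol signal would need roughly eight times the symbol rate to carry the same payload, which is exactly the spectral-efficiency gain described above.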

Importance of 400G ZR & ZR+

400G ZR and 400G ZR+ coherent pluggable optics take implementation challenges to the next level, combining elements of high-performance coherent solutions with component designs pushed toward low power, pluggability, and modularity.

Conclusion

There are still many challenges in making 400G ZR and 400G ZR+ transceiver modules that fit into the small size and power budget of OSFP or QSFP-DD packages while also achieving interoperability and meeting cost and volume targets. Nevertheless, with 400Gbps of optical bandwidth and low power consumption, 400G ZR & ZR+ may very well be the new generation in longer-reach optical communications.

Original Source: 400G ZR & ZR+ – New Generation of Solutions for Longer-reach Optical Communications

400G OSFP Transceiver Types Overview


OSFP stands for Octal Small Form-factor Pluggable: the module has 8 electrical lanes running at 50Gb/s each, for a total bandwidth of 400Gb/s. This post introduces the 400G OSFP transceiver types, their fiber connections, and some Q&As about OSFP.

400G OSFP Transceiver Types

The main 400G OSFP transceiver types currently on the market are OSFP SR8, OSFP DR4, OSFP DR4+, OSFP FR4, OSFP 2FR4, and OSFP LR4. They can be grouped by the two transmission types they support: over multimode fiber and over single-mode fiber.

Fiber Connections for 400G OSFP Transceivers

400G OSFP SR8

  • 400G OSFP SR8 to 400G OSFP SR8 over an MTP-16 cable.
Figure 1: OSFP SR8 to OSFP SR8
  • 400G OSFP SR8 to 2× 200G SR4 over MTP-16 to 2× MPO-8 breakout cable.
Figure 2: OSFP SR8 to 2× 200G SR4
  • 400G OSFP SR8 to 8× 50G SFP via MTP-16 to 8× LC duplex breakout cable, reaching up to 100m.
Figure 3: OSFP SR8 to 8× 50G SFP

400G OSFP DR4

  • 400G OSFP DR4 to 400G OSFP DR4 over an MTP-12/MPO-12 cable.
  • 400G OSFP DR4 to 4× 100G DR over MTP-12/MPO-12 to 4× LC duplex breakout cable.
Figure 4: OSFP DR4 to 4× 100G DR

400G OSFP XDR4/DR4+

  • 400G OSFP DR4+ to 400G OSFP DR4+ over an MTP-12/MPO-12 cable.
  • 400G OSFP DR4+ to 4× 100G DR over MTP-12/MPO-12 to 4× LC duplex breakout cable.
Figure 5: OSFP DR4+ to 4× 100G DR

400G OSFP FR4

400G OSFP FR4 to 400G OSFP FR4 over a duplex LC cable.

Figure 6: OSFP FR4 to OSFP FR4

400G OSFP 2FR4

OSFP 2FR4 can break out to 2× 200G and interoperate with 2× 200G FR4 QSFP transceivers via a 2× CS to 2× LC duplex cable.

400G OSFP Transceivers: Q&A

Q: What do “SR8”, “DR4”, “XDR4”, “FR4”, and “LR4” mean?

A: “SR” refers to short range, and the “8” implies there are 8 optical channels. “DR” refers to 500m reach using single-mode fiber, and the “4” implies there are 4 optical channels. “XDR4” is short for “eXtended reach DR4”. “FR” refers to 2km reach using single-mode fiber, and “LR” refers to 10km reach using single-mode fiber.
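To make the naming scheme easier to apply, the small lookup below encodes these conventions in Python. The reach values follow the definitions in this answer, plus the commonly quoted 2km figure for XDR4; the table and helper are illustrative, not an official registry.

```python
# Illustrative lookup encoding the OSFP naming conventions described above.
OSFP_TYPES = {
    "SR8":  {"channels": 8, "fiber": "multimode",   "reach_m": 100},
    "DR4":  {"channels": 4, "fiber": "single-mode", "reach_m": 500},
    "XDR4": {"channels": 4, "fiber": "single-mode", "reach_m": 2000},  # eXtended-reach DR4
    "FR4":  {"channels": 4, "fiber": "single-mode", "reach_m": 2000},
    "LR4":  {"channels": 4, "fiber": "single-mode", "reach_m": 10000},
}

def describe(module: str) -> str:
    t = OSFP_TYPES[module]
    return (f"400G OSFP {module}: {t['channels']} optical channels over "
            f"{t['fiber']} fiber, reach up to {t['reach_m']} m")

print(describe("DR4"))  # 4 optical channels over single-mode fiber, up to 500 m
```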

Q: Can I plug an OSFP transceiver module into a QSFP-DD port?

A: No. QSFP-DD and OSFP are totally different form factors. For more information about QSFP-DD transceivers, refer to 400G QSFP-DD Transceiver Types Overview. You can use only one kind of form factor in a given system. For example, if you have an OSFP system, OSFP transceivers and cables must be used.

Q: Can I plug a 100G QSFP28 module into an OSFP port?

A: Yes, but only with an adapter. When using a QSFP28 module in an OSFP port, the port must be configured for a data rate of 100G instead of 400G.

Q: What other breakout options are possible apart from using OSFP modules mentioned above?

A: 400G OSFP DACs and AOCs can also provide 400G breakout connections. See 400G Direct Attach Cables (DAC & AOC) Overview for more information about 400G DACs & AOCs.

Original Source: 400G OSFP Transceiver Types Overview

The Chip Shortage: Current Challenges, Predictions, and Potential Solutions


The COVID-19 pandemic caused several companies to shut down, and the implications were reduced production and altered supply chains. In the tech world, where silicon microchips are the heart of everything electronic, raw material shortage became a barrier to new product creation and development.

During the lockdown periods, all but essential workers were required to stay home, which meant chip manufacturing was largely unavailable for several months. By the time lockdown was lifted and the world embraced the new normal, the rising demand for consumer and business electronics was enough to ripple up the supply chain.

Below, we’ve discussed the challenges associated with the current chip shortage, what to expect moving forward, and the possible interventions necessary to overcome the supply chain constraints.

Challenges Caused by the Current Chip Shortage

As technology and rapid innovation sweep across industries, semiconductor chips have become an essential part of manufacturing – from devices like switches, wireless routers, computers, and automobiles to basic home appliances.

To understand and quantify the impact the chip shortage has had across industries, we'll need to look at some of the most affected sectors. Here's a quick breakdown of how things have unfolded over the last eighteen months.

Automobile Industry

Automakers in North America and Europe have slowed or stopped production due to a lack of computer chips. Major automakers like Tesla, Ford, BMW, and General Motors have all been affected. The major implication is that the global automobile industry will manufacture 4 million fewer cars by the end of 2021 than earlier planned, forfeiting an average of $110 billion in revenue.

Consumer Electronics

Consumer electronics such as desktop PCs and smartphones rose in demand throughout the pandemic, thanks to the shift to virtual learning among students and the rise in remote working. At the start of the pandemic, several automakers slashed their vehicle production forecasts before abandoning open semiconductor chip orders. And while the consumer electronics industry stepped in and scooped most of those microchips, the supply couldn’t catch up with the demand.

Data Centers

Most chip fabrication companies like Samsung Foundries, Global Foundries, and TSMC prioritized high-margin orders from PC and data center customers during the pandemic. And while this has given data centers a competitive edge, it isn’t to say that data centers haven’t been affected by the global chip shortage.

Some of the components data centers have struggled to source include those needed to put together their data center switching systems. These include BMC chips, capacitors, resistors, circuit boards, etc. Another challenge is the extended lead times due to wafer and substrate shortages, as well as reduced assembly capacity.

LED Lighting

LED backlights common in most display screens are powered by hard-to-find semiconductor chips. Gadgets with LED lighting features are now highly priced due to the shortage of raw materials and increased market demand. This is expected to continue into the beginning of 2022.

Renewable Energy: Solar and Turbines

Renewable energy systems, particularly solar and turbines, rely on semiconductors and sensors to operate. The global supply chain constraints have hurt the industry and even affected energy solutions manufacturers like Enphase Energy.

Semiconductor Trends: What to Expect Moving Forward

In response to the global chip shortage, several component manufacturers have ramped up production to help mitigate the shortages. However, top electronics and semiconductor manufacturers say the crunch will only worsen before it gets better. Most of these industry leaders speculate that the semiconductor shortage could persist into 2023.

Based on the ongoing disruption and supply chain volatility, various analysts in a recent CNBC article and Bloomberg interview echoed their views, and many are convinced that the coming year will be challenging. Here are some of the key takeaways:

Pat Gelsinger, CEO of Intel Corp., noted in April 2021 that it would take a couple of years for the industry to recover from the chip shortage.

A DigiTimes report found that lead times for the Intel and AMD server ICs used in data centers have extended to 45 to 66 weeks.

The world’s third-largest EMS and OEM provider, Flex Ltd., expects the global semiconductor shortage to proceed into 2023.

In May 2021, Global Foundries, the fourth-largest contract semiconductor manufacturer, signed a $1.6 billion, 3-year silicon supply deal with AMD, and in late June, it launched its new $4 billion, 300mm-wafer facility in Singapore. Yet, the company says the new capacity will not increase component output until 2023 at the earliest.

TSMC, one of the leading pure-play foundries in the industry, says it won't meaningfully increase component output until 2023. However, the company is optimistic that it will ramp up the fabrication of automotive micro-controllers by 60% by the end of 2021.

From the industry insights above, it’s evident that despite the many efforts that major players put into resolving the global chip shortage, the bottlenecks will probably persist throughout 2022.

Additionally, some industry observers believe that the move by big tech companies such as Amazon, Microsoft, and Google to design their own chips for cloud and data center business could worsen the chip shortage crisis and other problems facing the semiconductor industry.

In a recent article, the authors hint that the entry of Microsoft, Amazon, and Google into the chip design market will be a turning point in the industry. These tech giants have the resources to design superior and cost-effective chips of their own, resources that most chip designers like Intel have only in limited proportions.

As these tech giants become more independent, each will look to create component stockpiles to endure long waits and meet production demands between inventory refreshes. Again, this will further worsen the existing chip shortage.

Possible Solutions

To stay ahead of the game, major industry players such as chip designers and manufacturers and the many affected industries have taken several steps to mitigate the impacts of the chip shortage.

For many chip makers, expanding their production capacity has been an obvious response. Other suppliers in certain regions decided to stockpile and limit exports to better respond to market volatility and political pressures.

Similarly, improving the yields or increasing the number of chips manufactured from a silicon wafer is an area that many manufacturers have invested in to boost chip supply by some given margin.

Here are the other possible solutions that companies have had to adopt:

  • Embracing flexibility to accommodate older chip technologies that may not be “state of the art” but are still better than nothing.

  • Leveraging software solutions such as smart compression and compilation to build efficient AI models that help unlock hardware capabilities.

Conclusion

The latest global chip shortage has led to severe shocks in the semiconductor supply chain, affecting several industries, from automobiles and consumer electronics to data centers, LED lighting, and renewables.

Industry thought leaders believe that shortages will persist into 2023 despite the current build-up in mitigation measures. And while full recovery will not be witnessed any time soon, some chip makers are optimistic that they will ramp up fabrication to contain the demand among their automotive customers.

That said, staying ahead of the game is an all-time struggle considering this is an issue affecting every industry player, regardless of size or market position. Expanding production capacity, accommodating older chip technologies, and leveraging software solutions to unlock hardware capabilities are some of the promising solutions.


Article Source: The Chip Shortage: Current Challenges, Predictions, and Potential Solutions


Impact of Chip Shortage on Datacenter Industry


As the global chip shortage rips on, many chip manufacturers have had to slow or even halt semiconductor production. Makers of all kinds of electronics, such as switches, PCs, and servers, are scrambling to get enough chips in the pipeline to match the surging demand for their products. Every manufacturer, supplier, and solution provider in the datacenter industry is feeling the impact of the ongoing chip scarcity, and relief is nowhere in sight yet.

What’s Happening?

Due to the rise of AI and cloud computing, datacenter chips have been a highly charged topic in recent times. Because networking switches and modern servers, indispensable equipment in datacenter applications, use more advanced components than an average consumer's PC, chip manufacturers and suppliers naturally give data centers top priority. However, with demand for data center machines far outstripping supply, chip shortages may remain pervasive over the next few years. Coupled with the economic uncertainties caused by the pandemic, this puts further stress on datacenter management.

According to a report from the Dell’Oro Group, robust datacenter switch sales over the past year could foretell a looming shortage. As the mismatch in supply and demand keeps growing, enterprises looking to buy datacenter switches face extended lead times and elevated costs over the course of the next year.

“So supply is decreasing and demand is increasing,” said Sameh Boujelbene, leader of the analyst firm’s campus and data-center research team. “There’s a belief that things will get worse in the second half of the year, but no consensus on when it’ll start getting better.”

Back in March, Broadcom said that more than 90% of its total chip output for 2021 had already been ordered by customers, who are pressuring it for chips to meet booming demand for servers used in cloud data centers and consumer electronics such as 5G phones.

“We intend to meet such demand, and in doing so, we will maintain our disciplined process of carefully reviewing our backlog, identifying real end-user demand, and delivering products accordingly,” CEO Hock Tan said on a conference call with investors and analysts.

Major Implications

Extended Lead Times

Arista Networks, one of the largest data center networking switch vendors and a supplier of switches to cloud providers, forecasts that switch-silicon lead times will extend to as long as 52 weeks.

“The supply chain has never been so constrained in Arista history,” the company’s CEO, Jayshree Ullal, said on an earnings call. “To put this in perspective, we now have to plan for many components with 52-week lead time. COVID has resulted in substrate and wafer shortages and reduced assembly capacity. Our contract manufacturers have experienced significant volatility due to country specific COVID orders. Naturally, we’re working more closely with our strategic suppliers to improve planning and delivery.”

Hock Tan, CEO of Broadcom, also acknowledged on an earnings call that the company had “started extending lead times.” He said, “part of the problem was that customers were now ordering more chips and demanding them faster than usual, hoping to buffer against the supply chain issues.”

Elevated Cost

Vertiv, one of the biggest sellers of datacenter power and cooling equipment, mentioned it had to delay previously planned “footprint optimization programs” due to strained supply. The company’s CEO, Robert Johnson, said on an earnings call, “We have decided to delay some of those programs.”

Supply chain constraints combined with inflation would cause “some incremental unexpected costs over the short term,” he said, “To share the cost with our customers where possible may be part of the solution.”

“Prices are definitely going to be higher for a lot of devices that require a semiconductor,” says David Yoffie, a Harvard Business School professor who spent almost three decades serving on the board of Intel.

Conclusion

There is no telling how the situation will continue playing out and, most importantly, when supply and demand might get back to normal. Opinions vary on when the shortage will end. The CEO of chipmaker STMicro estimated that the shortage will end by early 2023. Intel CEO Patrick Gelsinger said it could last two more years.

As a high-tech network solutions and services provider, FS has been actively working with our customers to help them plan for, adapt to, and overcome the supply chain challenges, hoping that we can both ride out this chip shortage crisis. At least, we cannot lose hope, as advised by Bill Wyckoff, vice president at technology equipment provider SHI International, “This is not an ‘all is lost’ situation. There are ways and means to keep your equipment procurement and refresh plans on track if you work with the right partners.”

Article Source: Impact of Chip Shortage on Datacenter Industry


Data Center Infrastructure Basics and Management Solutions


Data center infrastructure refers to all the physical components in a data center environment. These physical components play a vital role in the day-to-day operations of a data center. Hence, data center management is an urgent issue for IT departments: on the one hand, to improve the energy efficiency of the data center; on the other, to track the data center's operating performance in real time, ensuring it stays in good working condition and sustains enterprise development.

Data Center Infrastructure Basics

The standard for data center infrastructure is divided into four tiers, each of which consists of different facilities. They mainly include cabling systems, power facilities, cooling facilities, network infrastructure, storage infrastructure, and computing resources.

There are roughly two types of infrastructure inside a data center: the core components and IT infrastructure. Network infrastructure, storage infrastructure, and computing resources belong to the former, while cooling equipment, power, redundancy, etc. belong to the latter.

Core Components

Network, storage, and computing systems are the vital infrastructure through which data centers provide shared access to applications and data. They are the core components of data centers.

Network Infrastructure

Data center network infrastructure is a combination of network resources, consisting of switches, routers, load balancers, analytics, and more, that facilitates the storage and processing of applications and data. Modern data center networking architectures, using full-stack networking and security virtualization platforms that support a rich set of data services, can connect everything from VMs to containers and bare-metal applications while enabling centralized management and fine-grained security controls.

Storage Infrastructure

Data center storage is a general term for the tools, technologies and processes for designing, implementing, managing and monitoring storage infrastructure and resources in data centers, mainly referring to the equipment and software technologies that implement data and application storage in data center facilities. These include hard drives, tape drives and other forms of internal and external storage, as well as backup management software utilities and external storage facilities/solutions.

Computing Resources

Data center computing resources are the memory and processing power that run applications, usually provided by high-end servers. In the edge computing model, the processing and memory used to run applications on servers may be virtualized, physical, distributed among containers or distributed among remote nodes.

IT Infrastructure

As data centers become critical to enterprise IT operations, it is equally important to keep them running efficiently. When designing data center infrastructure, it is necessary to evaluate the physical environment, including the cabling, power, and cooling systems, to ensure the security of the data center's physical environment.

Cabling Systems

Integrated cabling is an important part of data center cable management, supporting the connection, intercommunication and operation of the entire data center network. The system is usually composed of copper cables, optical cables, connectors and wiring equipment. Data center integrated cabling systems are characterized by high density, high performance, high reliability, fast installation, modularity, future readiness, and ease of use.

Power Systems

Data center digital infrastructure requires electricity to operate, and even an interruption of a fraction of a second can have a significant impact. Hence, power infrastructure is one of the most critical components of a data center. The data center power chain starts at the substation and runs through building transformers, switches, uninterruptible power supplies, power distribution units, and remote power panels to racks and servers.

Cooling Systems

Data center servers generate a lot of heat while running. Because of this, cooling is critical to data center operations, aiming to keep systems online. The amount of heat each rack can dissipate places a limit on the amount of power a data center can consume. Generally, a data center operates at an average cooling density of 5-10 kW per rack, though some racks may run higher.


Data Center Infrastructure Management Solutions

Due to the complexity of IT equipment in a data center, the availability, reliability, and maintenance of its components require more attention. Efficient data center operations can be achieved through balanced investments in facilities and accommodating equipment.

Energy Usage Monitoring Equipment

Traditional data centers lack the energy usage monitoring instruments and sensors required to comply with ASHRAE standards and to collect the measurement data used to calculate data center PUE, which leaves the power system poorly monitored. One remedy is to install energy monitoring components and systems on the power system to measure data center energy efficiency. With these measurements, enterprise teams can implement effective strategies to balance overall energy usage efficiency and monitor the energy usage of all other nodes.
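As a minimal sketch of what this monitoring enables, the snippet below computes PUE, the ratio of total facility energy to IT equipment energy; the sample readings are made up.

```python
# Minimal PUE (Power Usage Effectiveness) calculation.
# PUE = total facility energy / IT equipment energy; 1.0 is the ideal.
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    if it_equipment_kwh <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kwh / it_equipment_kwh

# Made-up readings: 1,500 kWh for the whole facility, 1,000 kWh of IT load.
print(f"PUE = {pue(1500, 1000):.2f}")  # PUE = 1.50
```

Everything above 1.0 is overhead (cooling, power distribution, lighting), which is exactly what the monitoring equipment helps the team drive down.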

Cooling Facilities Optimization

Independent computer room air conditioning units used in traditional data centers often have separate controls and set points, resulting in excessive operation due to temperature and humidity adjustments. Creating hot-aisle/cold-aisle layouts is a good way to help cool servers, maximizing the flow of cold air to the equipment intakes and of hot exhaust air away from the equipment racks. Adding partitions or ceilings to the hot or cold aisles eliminates the mixing of hot and cold air.

CRAC Efficiency Improvement

Packaged DX air conditioners are likely the most common type of cooling equipment for smaller data centers. These units are often described as CRAC units. There are, however, several ways to improve the energy efficiency of a cooling system employing DX units. Indoor CRAC units are available with a few different heat rejection options.

  • As with rooftop units, adding evaporative spray can improve the efficiency of air-cooled CRAC units.
  • A pre-cooling water coil can be added to the CRAC unit upstream of the evaporator coil. When ambient conditions allow the condenser water to be cooled to the extent that it provides direct cooling benefits to the air entering the CRAC unit, the condenser water is diverted to the pre-cooling coil. This will reduce or sometimes eliminate the need for compressor-based cooling for the CRAC unit.

DCIM

Data center infrastructure management is the combination of IT and operations to manage and optimize the performance of data center infrastructure within an organization. DCIM tools help data center operators monitor, measure, and manage the utilization and energy consumption of data center-related equipment and facility infrastructure components, effectively improving the relationship between data center buildings and their systems.

DCIM enables bridging of information across organizational domains such as data center operations, facilities, and IT to maximize data center utilization. Data center operators create flexible and efficient operations by visualizing real-time temperature and humidity status, equipment status, power consumption, and air conditioning workloads in server rooms.

Preventive Maintenance

In addition to the above management and operation solutions for infrastructure, unplanned maintenance is also an aspect to consider. Unplanned maintenance typically costs 3-9 times more than planned maintenance, primarily due to overtime labor costs, collateral damage, emergency parts, and service calls. IT teams can create a recurring schedule to perform preventive maintenance on the data center. Regularly checking the infrastructure status and repairing and upgrading the required components promptly can keep the internal infrastructure running efficiently, as well as extend the lifespan and overall efficiency of the data center infrastructure.

Article Source: Data Center Infrastructure Basics and Management Solutions


Understanding Data Center Redundancy


Maximizing the uptime should be the top priority for every data center, be they small or hyperscale. To keep your data center constantly running, a plan for redundancy systems is a must.

What Is Data Center Redundancy?

Data center redundancy refers to a system design where critical components such as UPS units, cooling systems and backup generators are duplicated so that data center operations can continue even if a component fails. For example, a redundant UPS system starts working when a power outage happens. In the event of downtime due to hazardous weather, power outages, or component failures, data center backup components play their role to keep the whole system running.

Why Is Data Center Redundancy Important?

It is imperative for businesses to increase uptime and recover quickly from downtime, whether unexpected or planned. Downtime hurts business. It can have a serious and direct impact on brand image, business operations, and customer experience, resulting in devastating financial losses, missed business opportunities and a tarnished reputation. Even for small businesses, unscheduled downtime can cost hundreds of dollars per minute.
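To put rough numbers on that, the sketch below converts an availability percentage into expected downtime per year and an illustrative annual cost. The $300-per-minute figure is an assumption for illustration, not an industry statistic.

```python
# Rough downtime math: availability percentage -> minutes down per year -> cost.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_cost(availability_pct: float, cost_per_minute: float):
    downtime_min = MINUTES_PER_YEAR * (1 - availability_pct / 100)
    return downtime_min, downtime_min * cost_per_minute

for avail in (99.0, 99.9, 99.99):
    minutes, cost = downtime_cost(avail, cost_per_minute=300)  # assumed $300/min
    print(f"{avail}% uptime -> {minutes:,.0f} min/year down, ~${cost:,.0f}/year")
```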

Redundancy configuration in data centers helps cut the risk of downtime, reducing the losses such impacts cause. A well-planned redundancy design means shorter potential downtime in the long run. Moreover, redundant components help keep data safe and secure by keeping data center operations running through component failures.

Redundancy is also a crucial factor in gauging data center reliability, performance and availability. The Uptime Institute offers a tier classification system that certifies data centers according to four distinct tiers—Tier 1, Tier 2, Tier 3 and Tier 4. Each tier has strict and specific requirements around data center redundancy level.

Different Levels of Redundancy

There is no one-size-fits-all redundancy design. Lower levels of redundancy mean increased potential downtime in the long run, while more redundancy results in less downtime but increases the cost of maintaining the redundant components; if your business model requires as little downtime as possible, that cost is often justifiable in terms of profit and overall net growth. To choose the right configuration for your business, it is important to recognize the capabilities and risks of different redundancy models, including N, N+1, N+X, 2N, 2N+1, and 3N/2.

N Model

N equals the amount of capacity required to power, backup, or cool a facility at full IT load. It can represent the units that you want to duplicate such as a generator, UPS, or cooling unit. For example, if a data center requires three UPS units to operate at full capacity, N would equal three.

An architecture of N means the facility is designed only to keep a data center running at full capacity. Simply put, N is the same as zero redundancy. If the data center facility is at full load and there is a hardware failure, scheduled maintenance, or an unexpected outage, mission-critical applications would suffer. With an N design, any interruption would leave your business unable to access your data until the issue is resolved.

N+1 or N+X Model

An N+1 redundancy model provides a minimal level of resiliency by adding a single component—a UPS, HVAC system or generator—to the N architecture to support a failure and maintain a full workload. When one system is offline, the extra component takes over the load. Going back to the previous example, if N equals three UPS units, N+1 provides four. Likewise, an N+2 redundancy design provides two extra components; in our example, N+2 provides five UPS units instead of four. In general, N+X provides X extra components to reduce risk in the event of multiple simultaneous failures.

2N Model

2N redundancy creates a mirror image of the original UPS, cooling system or generators to provide full fault tolerance. It means if three UPS units are necessary to support full capacity, this redundancy model would include an additional set of three UPS units, for a total of six systems. This design also utilizes two independent distribution systems.

With a 2N model, data center operators can take down an entire set of components for maintenance without affecting normal operations. Moreover, in the event of unscheduled multiple component failures, the additional set takes over to maintain full capacity. The resiliency of this model greatly cuts the risks of downtime.

2N+1 Model

If 2N means full fault tolerance, 2N+1 delivers the fully fault-tolerant 2N model plus an extra component for additional protection. Not only can this model withstand multiple component failures; even in a worst-case scenario where the entire primary system is offline, it can still sustain N+1 redundancy. For its high level of reliability, this redundancy model is generally used by businesses that cannot tolerate even minor service disruptions.

Figure: N+1 and 2N+1 redundancy

3N/2 Model

The three-to-make-two or 3N/2 redundant model refers to a redundancy methodology where additional capacity is based on the load of the system. If we consider a 3N/2 scenario, three power delivery systems will power two servers, which means each power delivery system utilizes 67% of the available capacity. Likewise, in a 4N/3, there will be four power delivery systems powering three workloads (three servers). The 3N/2 could be upgraded to 4N/3, but only in theory. This is because such an elaborate model has so many components that it would be very difficult to manage and balance loads to maintain redundancy.

Figure: 3N/2 redundancy
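The capacity arithmetic behind these models fits in a few lines of Python. This is a simplified sketch of the models as described above, not a sizing tool.

```python
# Sketch of the unit counts behind the redundancy models described above.
def units_required(n: int, model: str) -> int:
    """Total units deployed, where n units are needed at full load."""
    if model == "N":
        return n
    if model == "2N":
        return 2 * n
    if model == "2N+1":
        return 2 * n + 1
    if model.startswith("N+"):   # N+1, N+2, ... N+X
        return n + int(model[2:])
    raise ValueError(f"unknown model: {model}")

n = 3  # e.g., three UPS units needed at full capacity
for m in ("N", "N+1", "N+2", "2N", "2N+1"):
    print(f"{m}: {units_required(n, m)} units")  # 3, 4, 5, 6, 7

# 3N/2: three delivery systems share two systems' worth of load,
# so each runs at 2/3, or about 67%, of its capacity.
print(f"3N/2 per-system utilization: {2/3:.0%}")
```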

What’s the Right One for You?

Choosing a redundant model that meets your business needs can be challenging. Finding the right balance between reliability and cost is the key. For businesses that require as little downtime as possible, higher levels of redundancy are justifiable in terms of profit and overall net growth. For those that do not, lower levels of redundancy are acceptable. They are cheaper and more energy-efficient than the other more sophisticated redundancy designs.

In a word, there’s no right or wrong redundancy model because it depends on a range of factors like your business goals, budget, and IT environment. Consult your data center provider or discuss with your IT team to figure out the best option for you.

Article Source: Understanding Data Center Redundancy


5 Factors to Consider for Data Center Environmental Monitoring


What Are Data Center Environmental Standards

Data center environmental monitoring is vital for device operations. Data center architecture is divided into four tiers, and the equipment housed at each tier shapes the design of data center environmental standards.

  • Tier I defines data center standards for facilities with minimal redundancy.
  • Tier II provides redundant critical power and cooling components.
  • Tier III adds redundant transmission paths for power and cooling to redundant critical components.
  • Tier IV infrastructure is built on Tier III and adds the concept of fault tolerance to the infrastructure topology.

Enterprises must comply with fairly stringent environmental standards to ensure these facilities remain functional.

Evolution of Data Center Environmental Standards

As early as the 1970s and 1980s, data center environmental monitoring revolved around power facilities: for example, whether the environment housing the power supply was properly isolated, and whether the main power supply affected the operation of the overall equipment. Cooling, by contrast, was rarely monitored. Some enterprises explored cooling technologies, such as liquid cooling, to facilitate cooling in data centers; typically, enterprises used loud fans to control airflow. In some countries, the cost of electricity was high, so there was a greater emphasis on being able to supply enough electricity for a given system configuration.

In the 1990s, rack power density became a key consideration in enterprise data center environmental standards. In the past, a simple power factor calculation could yield the required cooling value for a data center, but increasing rack densities made such calculations inaccurate. At this point, enterprises had to re-plan the airflow patterns of data center racks and equipment, which required IT managers to know more statistics when designing a data center, such as pressure drop, air velocity, and flow resistance.

By the early 2000s, power densities were still increasing, and thermal modeling was seen as a potential answer to optimizing the cooling of data center environments. For lack of the necessary data, temperature readings were typically collected only after data center construction, and IT managers then made adjustments based on that information. Enterprises should choose the correct thermal model of equipment when building a data center to enhance data center environmental monitoring. Several environmental control methods for building a data center follow.

5 Factors in Data Center Environmental Controls

To ensure the reliable operation of IT equipment within a rack, the primary concerns for monitoring and controlling data center environmental conditions are temperature, humidity, static electricity, and physical and human safety. Moreover, these factors affect not only the ecological environment but also data center security, energy efficiency, and the enterprise's social image.

Temperature Control

Thermal control is always a challenging issue for data centers, as servers emit heat while running; if they are paralyzed by overheating, data center operations are crippled. Temperature control verifies that equipment is operating within the recommended temperature range. Temperature sensors are an effective tool for this: placing them in strategic locations and reading the overall temperature allows IT managers to exercise temperature control promptly.

Humidity Control

Humidity control is closely related to temperature levels. High humidity can corrode hardware, while low humidity can cause electrostatic arcing problems. For this reason, cooling and ventilation systems need to detect and control the relative humidity of the room air. ASHRAE recommends operating within a dew point range of 41.9 to 59 degrees Fahrenheit with a maximum relative humidity of 60%. Data center designers need to invest in systems that can detect humidity and water near equipment to better monitor cooling fans and measure the presence of airflow during routine management. In larger facilities, a set of computer room air conditioner (CRAC) units can create consistent airflow throughout the room. These CRAC systems typically work by drawing in hot air, cooling it, and expelling it as cool air through vents and air intakes leading to the servers.
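A minimal sketch of the threshold check such a system performs is shown below. The dew point range and 60% ceiling come from the ASHRAE recommendation above; the sample reading and the alerting logic are hypothetical.

```python
# Hypothetical environmental check against the ASHRAE figures cited above.
def check_environment(dew_point_f: float, relative_humidity_pct: float) -> list:
    alerts = []
    if not 41.9 <= dew_point_f <= 59.0:           # recommended dew point range
        alerts.append(f"dew point {dew_point_f} F outside 41.9-59 F range")
    if relative_humidity_pct > 60.0:              # recommended RH maximum
        alerts.append(f"relative humidity {relative_humidity_pct}% above 60% max")
    return alerts

# Example reading from a (hypothetical) cold-aisle sensor.
for alert in check_environment(dew_point_f=63.0, relative_humidity_pct=65.0):
    print("ALERT:", alert)
```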

Electricity Monitoring

Static electricity is also one of the threats in the data center environment; it is an invisible nuisance. Some newer IT components can be damaged or completely fried by less than 25 volts of discharge. Left unaddressed, this problem can result in frequent disconnections, system crashes, and even data corruption. Unexpected bursts of energy in the form of electrostatic discharges may be the greatest threat to the performance of the average data center. To prevent such incidents, businesses must install energy monitors that are strategically located to detect the buildup of static electricity.

Fire Suppression

A comprehensive fire suppression system is a must-have feature in data center environmental standards. If an entire data center is to be protected from disaster, data center designers need to take security measures from fire and fire suppression systems to physical and virtual systems. Fire suppression systems are subject to regular testing and active monitoring of the data center to ensure that they will indeed do their job in the clutch.

Security Systems

Physical security is also a very important part of data center environmental standards. IT departments must institute limits that keep intruders away from buildings as well as server rooms and the racks inside them. Setting up a complete range of physical security is a desirable approach—from IP surveillance systems to advanced sensors—so that if unauthorized personnel are detected entering a building or server rack, data center managers are alerted.

Summary

The purpose of data center environmental monitoring is to provide a better operating environment for facilities and avoid unplanned incidents that affect enterprise business. Applying the environmental controls above when designing a data center helps enterprises maintain data center security and eases data center management, while properly limiting the data center's environmental impact on ecology and energy efficiency.

Article Source: 5 Factors to Consider for Data Center Environmental Monitoring


What Is Edge Computing?


In today's data-driven world, where businesses rely on real-time information to make critical decisions, edge computing has become the technology of choice. By moving a portion of compute and storage resources out of the central data center and closer to the data source, it greatly reduces latency issues, bandwidth limitations, and the impact of network disruptions.

With edge computing, data produced on a factory floor or in a retail store is processed and analyzed at the network's edge, within the premises. Since the data doesn't travel across networks, speed is one obvious advantage. This translates to instant analysis of data, faster response by site personnel, and real-time decision-making.

How Edge Computing Works

Edge computing brings computing power closer to the data source, where sensors and other data-capturing instruments are located. The entire edge computing process takes place inside intelligent devices, which speed up the processing of the collected data before the devices connect to the IoT.

The goal of edge computing is to boost efficiency. Instead of sending all the data collected by sensors to the enterprise applications for processing, edge devices do the computing and only send important data for further analysis or storage. This is possible thanks to edge AI, i.e., artificial intelligence at the edge.

After the edge devices do the computation of the data with the help of edge AI, these devices group the data collected or results obtained into different categories. The three basic categories are:

  • Data that doesn’t need further action and shouldn’t be stored or transmitted to enterprise applications.
  • Data that should be retained for further analysis or record keeping.
  • Data that requires an immediate response.

The work of edge computing is to discriminate between these data sets and identify the level of response and the action required, then act on it accordingly.
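As a toy illustration of that triage, the sketch below classifies a single sensor reading into the three categories above; the thresholds and reading format are hypothetical.

```python
# Toy triage of a sensor reading at the edge, per the three categories above.
DISCARD, RETAIN, RESPOND = "discard", "retain", "respond"

def triage(value: float, normal_max: float, critical: float) -> str:
    """Decide what happens to a reading before anything crosses the network."""
    if value > critical:
        return RESPOND   # requires an immediate local response
    if value > normal_max:
        return RETAIN    # anomalous: keep for further analysis or records
    return DISCARD       # routine: no need to store or transmit

# Hypothetical motor-temperature reading on a factory floor.
print(triage(value=96.0, normal_max=80.0, critical=95.0))  # -> "respond"
```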


Depending on the compute power of the edge device and the complexity of the data collected, the device may work on the outlier data and provide a real-time response, or send it to the enterprise application for further analysis in real time with immediate retrieval of the results. Since only the important and urgent data sets are sent over the network, the bandwidth requirement is reduced. This results in substantial cost savings, especially over wireless cellular networks.

Why Edge Computing?

There are several reasons why edge computing is winning the popularity battle in the enterprise computing world. Digital transformation initiatives, from robotics & advanced automation to AI and data analytics, all have one thing in common – they are largely data-dependent. Most industries that leverage these technologies are also time-sensitive, meaning the data they produce becomes irrelevant in a matter of minutes, if not seconds.

The large amounts of data currently produced by IoT devices strain a shared computing model through system congestion and network disruption. This results in huge financial losses, injuries, and costly damages for time- and disruption-sensitive applications. The attractiveness of edge computing often narrows down to the three network challenges it seeks to solve. These are:

Latency – a lag in the communication between devices and the network delays decision-making in time-sensitive applications. Edge computing solves this problem using a more distributed network, which ensures there's no disconnect in real-time information transfer and processing, giving a more reliable and consistent network.

Bandwidth – every network has a limited bandwidth, especially wireless communications. Edge computing overcomes bandwidth limitations by processing immense volumes of data near the network's edge and then sending only the most relevant information through the network. This minimizes the volume of data that requires a cellular connection.

Data Compliance and Governance – organizations that handle sensitive data are subject to the data regulations of various countries. By processing such data near its source, these companies can keep sensitive customer and employee data within their borders, ensuring compliance.

Edge Computing Use Cases

Over the years, edge data centers have found several use cases across industries, thanks to rapid tech adoption and the benefits of processing data at the network edge. Ideally, any application that requires moving large amounts of data to a centralized data center before retrieving the result and insights could benefit a lot from edge computing. Below are the different ways several industries use edge computing in their day-to-day operations:

Transportation – autonomous vehicles produce around 5 to 20 terabytes of data daily from information about speed, location, traffic conditions, road conditions, etc. This data must be organized, processed, and analyzed in real-time, and insights fed into the system while the vehicle is on the road. This time-sensitive application requires accurate, reliable, and consistent onboard computing.
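Some back-of-envelope arithmetic shows why that processing has to happen onboard rather than over a network link:

```python
# 5-20 TB/day expressed as a sustained network rate (decimal units).
def tb_per_day_to_gbps(tb: float) -> float:
    bits_per_day = tb * 1e12 * 8
    return bits_per_day / 86_400 / 1e9   # seconds per day, then to gigabits

for tb in (5, 20):
    print(f"{tb} TB/day ~ {tb_per_day_to_gbps(tb):.2f} Gbps sustained")
# 5 TB/day ~ 0.46 Gbps; 20 TB/day ~ 1.85 Gbps -- per vehicle, around the clock.
```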

Manufacturing – several manufacturers now deploy edge computing to monitor manufacturing processes and enable real-time analytics. By coupling this with machine learning and AI, edge computing can help streamline manufacturing processes with real-time insights, predictive analytics, and more.

Farming – indoor farming relies on different sensors that collect a wide range of data that must be processed and analyzed to gain insights into the crops’ health, weather conditions, nutrient density, etc. Edge computing makes this data processing and insights generation faster, hence faster response and decision making.

The other areas where edge computing has been adopted include healthcare facilities to help patients avoid health issues in real-time and retail to optimize vendor ordering and predict sales.

Edge Computing Challenges

Edge computing isn't without its challenges, and some of the common ones revolve around security and data lifecycles. Applications that rely on IoT devices are vulnerable to data breaches, which could compromise security at the edge. As far as data lifecycles are concerned, the challenge comes with the large amount of data stored at the network's edge. A ton of useless data may take up critical space; hence businesses should choose carefully which data to keep and which to discard.

Edge computing also relies on some level of connectivity, and the typical network limitations are another cause for concern. It is therefore necessary to plan for connectivity problems and design an edge computing deployment that can accommodate common networking issues.

Implementing Edge Computing

Regardless of the industry you are in, edge computing comes with several benefits, but only if it’s designed well and deployed to solve the challenges common with centralized data centers. To get the most from your investment, you want to work with a reputed edge computing company or an expert IT consultant to guide you on the best way forward.

Article Source: What Is Edge Computing?


Why Green Data Center Matters


Background

Green data centers have emerged in enterprise construction as new data storage requirements grow continuously and awareness of green environmental protection steadily strengthens. Newly retained data must be protected, cooled, and transferred efficiently. This means that the huge energy demands of data centers present challenges in terms of cost and sustainability, and enterprises are increasingly concerned about the energy demands of their data centers. Sustainable and renewable energy resources have thus become the development trend for green data centers.

Green Data Center Is a Trend

A green data center is a facility similar to a regular data center that hosts servers to store, manage, and disseminate data. It is designed to minimize environmental impact by providing maximum energy efficiency. Green data centers have the same characteristics as typical data centers, but the internal system settings and technologies can effectively reduce energy consumption and carbon footprints for enterprises.

The internal construction of a green data center requires the support of a series of services, such as cloud services, cable TV services, Internet services, colocation services, and data protection security services. Of course, many enterprises or carriers have equipped their data centers with cloud services. Some enterprises may also need to rely on other carriers to provide Internet and related services.

According to market research, the global green data center market was worth around $59.32 billion in 2021 and is expected to grow at a CAGR of 23.5% through 2026. This also shows that the transition to renewable energy sources is accelerating with the growth of green data centers.

As the growing demand for data storage drives the modernization of data centers, it also places higher demands on power and cooling systems. On the one hand, data centers that rely on non-renewable energy to generate electricity face rising electricity costs; on the other hand, some enterprises consume large amounts of water for cooling facilities and server cleaning. All of these are ample opportunities for the green data center market. For example, as Facebook and Amazon continue to expand their businesses, global companies' need for data storage increases. These enterprises need a lot of data to analyze potential customers, and processing that data requires a lot of energy. The green data center has therefore become an urgent need for enterprises looking to solve these problems, and it can also bring them many other benefits.

Green Data Center Benefits

The green data center concept has grown rapidly in the process of enterprise data center development. Many businesses prefer alternative energy solutions for their data centers, which can bring many benefits to the business. The benefits of green data centers are as follows.

Energy Saving

Green data centers are designed not only to conserve energy but also to reduce the need for expensive infrastructure to handle cooling and power needs. Sustainable or renewable energy is an abundant and reliable source of energy that can significantly reduce power usage effectiveness (PUE). A lower PUE enables enterprises to use electricity more efficiently. Green data centers can also use colocation services to decrease server usage, lower water consumption, and reduce the cost of corporate cooling systems.

Cost Reduction

Green data centers use renewable energy to reduce power consumption and business costs through the latest technologies. Shutting down servers that are being upgraded or managed can also help reduce energy consumption at the facility and control operating costs.

Environmental Sustainability

Green data centers can reduce the environmental impact of computing hardware, thereby creating data center sustainability. Ever-advancing technology requires modern data centers to adopt new equipment and techniques, and these new server devices and virtualization technologies lower energy consumption, which is environmentally sustainable and brings economic benefits to data center operators.

Enterprise Social Image Enhancement

Today, users are increasingly interested in solving environmental problems. Green data center services help businesses address these issues quickly without compromising performance. Many customers already see responsible business conduct as a value proposition. By building green data centers to meet the compliance and regulatory requirements of the corresponding regions, enterprises improve their social image.

Reasonable Use of Resources

In an environmentally friendly way, green data centers allow enterprises to make better use of resources such as electricity, physical space, and heat by integrating the internal facilities of the data center. This promotes efficient operation of the data center while achieving rational utilization of resources.

5 Ways to Create a Green Data Center

Having covered the benefits of a green data center, how do you build one? Here are a series of green data center solutions.

  • Virtualization extension: With virtualization technology, enterprises can run multiple applications and operating systems on fewer servers, a key step toward building a green data center.
  • Renewable energy utilization: Enterprises can opt for solar panels, wind turbines or hydroelectric plants that can generate energy to power backup generators without any harm to the environment.
  • Enter eco mode: Switching alternating-current UPSs to eco mode is one way to significantly improve data center efficiency and PUE. Alternatively, enterprises can reuse equipment, which not only saves money but also keeps unnecessary emissions from seeping into the atmosphere.
  • Optimized cooling: Data center infrastructure managers can introduce simple and implementable cooling solutions, such as deploying hot aisle/cold aisle configurations. Data centers can further accelerate cooling output by investing in air handlers and coolers, and installing economizers that draw outside air from the natural environment to build green data center cooling systems.
  • DCIM and BMS systems: DCIM software and BMS software can help data center managers identify and document ways to use energy more efficiently, helping data centers become more efficient and achieve sustainability goals.

Conclusion

Data center sustainability means reducing energy/water consumption and carbon emissions to offset increased computing and mobile device usage to keep business running smoothly. The development of green data centers has become an imperative development trend, and it also caters to the green goals of global environmental protection. As a beneficiary, enterprises can not only save operating costs, but also effectively reduce energy consumption. This is also an important reason for the construction of green data centers.

Article Source: Why Green Data Center Matters


PCI vs PCI Express: What’s the Difference?


PCI and PCI Express are two different internal bus standards for connecting peripheral devices to equipment like computers and network servers. But do you know how they are related, and could you tell the difference between PCI and PCI Express? To answer these questions, this post explores both PCI and PCI Express.

What Do PCI and PCI Express Stand for?

What Is PCI?

PCI, short for peripheral component interconnect, is a connection interface standard developed by Intel in 1990. Originally it was used only in servers; later, from 1995 to 2005, PCI was widely implemented in computers and other network equipment like network switches. Most commonly, PCI is used for expansion cards that insert into the PCI slots on the motherboard of a host or server. In the expansion card market, popular PCI expansion cards include the network interface card (NIC), graphics card, and sound card.

What Is PCI Express?

Figure 1: PCI Express Network Card

PCI Express, abbreviated PCIe, stands for peripheral component interconnect express. As the successor of PCI, PCI Express is a connection standard introduced by Intel in 2001 that provides more bandwidth and better compatibility with existing operating systems than PCI. Like PCI, PCIe is used for expansion cards, such as PCIe Ethernet cards, that insert into PCI Express slots.

Comparison of PCI vs PCI Express

As the replacement for PCI, PCI Express differs from it in several aspects, such as working topology and bandwidth. This part offers a brief comparison of PCI vs PCI Express.

PCI vs PCI Express in Working Topology: PCI is a parallel connection, and devices connected to the PCI bus share that single bus. PCIe, by contrast, is a high-speed serial connection: instead of one bus that handles data from multiple sources, PCIe has a switch that controls several point-to-point serial connections.


Figure 2: PCI vs PCI Express

PCI vs PCI Express in Bandwidth: Generally, PCI comes in fixed 32-bit and 64-bit versions running at 33 MHz or 66 MHz. At 32 bits and 33 MHz the potential bandwidth is 133 MB/s; at 66 MHz it is 266 MB/s; and at 64 bits with 66 MHz it is 532 MB/s. For PCIe cards, the bandwidth varies from 250 MB/s to several GB/s per lane, depending on the card size and version. For more detail, you can refer to the post: PCIe Card Tutorial: What Is PCIe Card and How to Choose It?
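The PCI figures above are straightforward arithmetic: bus width times clock rate. The sketch below reproduces them and, for comparison, scales commonly cited per-lane PCIe rates up to an x16 slot; the per-lane values are approximate effective bandwidths per generation.

```python
# PCI: a parallel bus moves (width in bytes) x (clock) bytes per second.
def pci_bandwidth_mb_s(bus_width_bits: int, clock_mhz: float) -> float:
    return bus_width_bits / 8 * clock_mhz

print(pci_bandwidth_mb_s(32, 33.33))  # ~133 MB/s (32-bit, nominal 33 MHz clock)
print(pci_bandwidth_mb_s(32, 66.66))  # ~266 MB/s
print(pci_bandwidth_mb_s(64, 66.66))  # ~533 MB/s (quoted as 532 MB/s above)

# PCIe scales per lane and per generation (approximate effective bandwidth).
pcie_per_lane_mb_s = {"1.0": 250, "2.0": 500, "3.0": 985}
for gen, mb in pcie_per_lane_mb_s.items():
    print(f"PCIe {gen} x16: ~{mb * 16 / 1000:.1f} GB/s")  # 4.0, 8.0, 15.8 GB/s
```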

PCI vs PCI Express in Others: PCI Express can connect a maximum of 32 end-point devices and supports hot plugging, while PCI supports a maximum of 5 devices and offers no hot-plugging function.

FAQs About PCI Vs PCI Express

1. Is PCI slower than PCI Express?

Yes, PCIe is faster than PCI. Take PCIe x1 as an example: it is at least 118% faster than PCI. The difference is even more obvious when you compare video cards: a PCIe x16 video card is almost 29 times faster than a PCI video card.

2. Can PCI cards work in PCIe slots?

The answer is no. PCIe and PCI are not compatible with each other due to their different configurations. In most cases, there are both PCI and PCIe slots on the motherboard, so please fit the card into its matching slot and do not misuse the two types.

3. What is a PCIe slot?

A PCIe slot corresponds to the physical size of a PCI Express card. By and large, there are four slot types: x16, x8, x4, and x1. The larger the number, the longer the slot: for example, a PCIe x1 slot is 25 mm in length, while a PCIe x16 slot is 89 mm.

Summary

In this post, we compared PCI and PCI Express, from their origins and working topologies to their bandwidth and more. The final part lists several frequently asked questions for your information. We hope this post gives you some help in telling PCI and PCI Express apart.