
Impact of Chip Shortage on Datacenter Industry


As the global chip shortage drags on, many chip manufacturers have had to slow or even halt semiconductor production. Makers of all kinds of electronics, such as switches, PCs, and servers, are scrambling to get enough chips in the pipeline to match the surging demand for their products. Every manufacturer, supplier, and solution provider in the datacenter industry is feeling the impact of the ongoing chip scarcity, and relief is nowhere in sight.

What’s Happening?

Due to the rise of AI and cloud computing, datacenter chips have become a hot topic in recent times. Because networking switches and modern servers, indispensable equipment in datacenter applications, use more advanced components than the average consumer PC, data centers are naturally given top priority by chip manufacturers and suppliers. Even so, with demand for data center machines far outstripping supply, chip shortages may remain pervasive over the next few years. Coupled with the economic uncertainty caused by the pandemic, this puts further stress on datacenter management.

According to a report from the Dell’Oro Group, robust datacenter switch sales over the past year could foretell a looming shortage. As the mismatch in supply and demand keeps growing, enterprises looking to buy datacenter switches face extended lead times and elevated costs over the course of the next year.

“So supply is decreasing and demand is increasing,” said Sameh Boujelbene, leader of the analyst firm’s campus and data-center research team. “There’s a belief that things will get worse in the second half of the year, but no consensus on when it’ll start getting better.”

Back in March, Broadcom said that more than 90% of its total chip output for 2021 had already been ordered by customers, who are pressuring it for chips to meet booming demand for servers used in cloud data centers and consumer electronics such as 5G phones.

“We intend to meet such demand, and in doing so, we will maintain our disciplined process of carefully reviewing our backlog, identifying real end-user demand, and delivering products accordingly,” CEO Hock Tan said on a conference call with investors and analysts.

Major Implications

Extended Lead Times

Arista Networks, one of the largest data center networking switch vendors and a supplier of switches to cloud providers, warns that switch-silicon lead times could stretch to as long as 52 weeks.

“The supply chain has never been so constrained in Arista history,” the company’s CEO, Jayshree Ullal, said on an earnings call. “To put this in perspective, we now have to plan for many components with 52-week lead time. COVID has resulted in substrate and wafer shortages and reduced assembly capacity. Our contract manufacturers have experienced significant volatility due to country specific COVID orders. Naturally, we’re working more closely with our strategic suppliers to improve planning and delivery.”

Hock Tan, CEO of Broadcom, also acknowledged on an earnings call that the company had "started extending lead times." He said part of the problem was that customers were now ordering more chips and demanding them faster than usual, hoping to buffer against supply chain issues.

Elevated Cost

Vertiv, one of the biggest sellers of datacenter power and cooling equipment, mentioned it had to delay previously planned “footprint optimization programs” due to strained supply. The company’s CEO, Robert Johnson, said on an earnings call, “We have decided to delay some of those programs.”

Supply chain constraints combined with inflation would cause "some incremental unexpected costs over the short term," he said. "To share the cost with our customers where possible may be part of the solution."

“Prices are definitely going to be higher for a lot of devices that require a semiconductor,” says David Yoffie, a Harvard Business School professor who spent almost three decades serving on the board of Intel.

Conclusion

There is no telling how the situation will play out and, most importantly, when supply and demand might return to normal. Opinions vary on when the shortage will end. The CEO of chipmaker STMicro estimated that the shortage will end by early 2023, while Intel CEO Patrick Gelsinger said it could last two more years.

As a high-tech network solutions and services provider, FS has been actively working with customers to help them plan for, adapt to, and overcome the supply chain challenges, so that together we can ride out this chip shortage. As Bill Wyckoff, vice president at technology equipment provider SHI International, advised: "This is not an 'all is lost' situation. There are ways and means to keep your equipment procurement and refresh plans on track if you work with the right partners."

Article Source: Impact of Chip Shortage on Datacenter Industry

Related Articles:

The Chip Shortage: Current Challenges, Predictions, and Potential Solutions

Data Center Infrastructure Basics and Management Solutions


Datacenter infrastructure refers to all the physical components in a data center environment. Because these components play a vital role in day-to-day operations, data center management is an urgent issue for IT departments, with two main goals: improving the energy efficiency of the data center, and tracking its operating performance in real time to keep it in good working condition and support enterprise development.

Data Center Infrastructure Basics

Data center infrastructure standards are divided into four tiers, each of which requires different facilities. The main components include cabling systems, power facilities, cooling facilities, network infrastructure, storage infrastructure, and computing resources.

There are roughly two types of infrastructure inside a data center: core components and IT infrastructure. Network infrastructure, storage infrastructure, and computing resources belong to the former, while cooling equipment, power, redundancy, and the like belong to the latter.

Core Components

Network, storage, and computing systems are the core components of a data center, providing shared access to applications and data.

Network Infrastructure

Datacenter network infrastructure is a combination of network resources, including switches, routers, load balancers, analytics, and more, that facilitates the storage and processing of applications and data. Modern data center networking architectures, using full-stack networking and security virtualization platforms that support a rich set of data services, can connect everything from VMs and containers to bare-metal applications while enabling centralized management and fine-grained security controls.

Storage Infrastructure

Datacenter storage is a general term for the tools, technologies, and processes for designing, implementing, managing, and monitoring storage infrastructure and resources in data centers, mainly the equipment and software technologies that implement data and application storage in data center facilities. These include hard drives, tape drives, and other forms of internal and external storage, as well as backup management software utilities and external storage facilities/solutions.

Computing Resources

Computing resources are the memory and processing power needed to run applications, usually provided by high-end servers. In the edge computing model, the processing and memory used to run applications on servers may be virtualized, physical, distributed among containers, or distributed among remote nodes.

IT Infrastructure

As data centers become critical to enterprise IT operations, it is equally important to keep them running efficiently. When designing data center infrastructure, it is necessary to evaluate the physical environment, including the cabling, power, and cooling systems, to ensure the physical security of the data center.

Cabling Systems

Structured cabling is an important part of data center cable management, supporting the connection, intercommunication, and operation of the entire data center network. The system is usually composed of copper cables, optical cables, connectors, and patching equipment. A data center structured cabling system is characterized by high density, high performance, high reliability, fast modular installation, and a future-oriented, easy-to-deploy design.

Power Systems

Datacenter digital infrastructure requires electricity to operate, and even an interruption of a fraction of a second can have a significant impact. Hence, power infrastructure is one of the most critical components of a data center. The data center power chain starts at the substation and runs through building transformers, switches, uninterruptible power supplies, and power distribution units to remote power panels, racks, and servers.

Cooling Systems

Data center servers generate a lot of heat while running, so cooling is critical to data center operations, aiming to keep systems online. The amount of heat that can be removed from each rack places a limit on the amount of power a data center can consume. Generally, data centers operate at an average cooling density of 5-10 kW per rack, though some run higher.


Data Center Infrastructure Management Solutions

Due to the complexity of IT equipment in a data center, the availability, reliability, and maintenance of its components require close attention. Efficient data center operations can be achieved through balanced investment in both the facility and the equipment it houses.

Energy Usage Monitoring Equipment

Traditional data centers lack the energy monitoring instruments and sensors required to comply with ASHRAE standards and to collect the measurement data needed to calculate data center power usage effectiveness (PUE), which leaves the power system poorly monitored. One measure is to install energy monitoring components and systems on the power chain to measure data center energy efficiency. With these measurements, enterprise teams can implement effective strategies to balance overall energy usage and monitor the energy use of every node.
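
As a rough illustration of the PUE arithmetic involved, here is a minimal sketch that computes PUE from two meter readings. The function name and the example numbers are hypothetical; a real deployment would pull these values from the monitoring systems described above.

```python
def compute_pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE = total facility power / IT equipment power (the ideal is 1.0)."""
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment power must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical meter readings: 1,500 kW drawn by the whole facility,
# 1,000 kW of which is consumed by IT equipment.
pue = compute_pue(total_facility_kw=1500.0, it_equipment_kw=1000.0)
print(f"PUE: {pue:.2f}")  # PUE: 1.50 -> 0.5 W of overhead per watt of IT load
```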

Cooling Facilities Optimization

Independent computer room air conditioning units used in traditional data centers often have separate controls and set points, resulting in excessive operation as each unit adjusts temperature and humidity on its own. Creating hot-aisle/cold-aisle layouts is a good way to cool servers, maximizing the flow of cold air to equipment intakes and of hot exhaust air away from equipment racks. Adding partitions or ceilings to enclose the hot or cold aisles eliminates the mixing of hot and cold air.

CRAC Efficiency Improvement

Packaged DX air conditioners are likely the most common type of cooling equipment for smaller data centers. These units are often described as CRAC units. There are, however, several ways to improve the energy efficiency of a cooling system employing DX units, as indoor CRAC units are available with a few different heat rejection options.

  • As with rooftop units, adding evaporative spray can improve the efficiency of air-cooled CRAC units.
  • A pre-cooling water coil can be added to the CRAC unit upstream of the evaporator coil. When ambient conditions allow the condenser water to be cooled to the extent that it provides direct cooling benefits to the air entering the CRAC unit, the condenser water is diverted to the pre-cooling coil. This will reduce or sometimes eliminate the need for compressor-based cooling for the CRAC unit.

DCIM

Data center infrastructure management is the combination of IT and operations to manage and optimize the performance of data center infrastructure within an organization. DCIM tools help data center operators monitor, measure, and manage the utilization and energy consumption of data center-related equipment and facility infrastructure components, effectively improving the relationship between data center buildings and their systems.

DCIM enables bridging of information across organizational domains such as data center operations, facilities, and IT to maximize data center utilization. Data center operators create flexible and efficient operations by visualizing real-time temperature and humidity status, equipment status, power consumption, and air conditioning workloads in server rooms.
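
To make the monitoring idea concrete, below is a minimal sketch of the kind of threshold check a DCIM tool performs on real-time sensor readings. The metric names, limits, and readings are illustrative assumptions, not any particular vendor's API; the temperature bounds follow the commonly cited ASHRAE-recommended inlet range.

```python
# Minimal sketch of a DCIM-style threshold check (illustrative only).
THRESHOLDS = {
    "temperature_c": (18.0, 27.0),   # ASHRAE-recommended inlet air range
    "humidity_pct": (8.0, 60.0),     # assumed relative-humidity bounds
    "rack_power_kw": (0.0, 10.0),    # assumed per-rack power budget
}

def check_reading(metric: str, value: float) -> str:
    """Compare one sensor reading against its configured bounds."""
    low, high = THRESHOLDS[metric]
    if value < low:
        return f"ALERT: {metric}={value} below {low}"
    if value > high:
        return f"ALERT: {metric}={value} above {high}"
    return f"OK: {metric}={value}"

# Hypothetical real-time readings from one server room.
readings = {"temperature_c": 29.5, "humidity_pct": 45.0, "rack_power_kw": 7.2}
for metric, value in readings.items():
    print(check_reading(metric, value))
```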

Preventive Maintenance

In addition to the above management and operation solutions for infrastructure, unplanned maintenance is also an aspect to consider. Unplanned maintenance typically costs 3-9 times more than planned maintenance, primarily due to overtime labor costs, collateral damage, emergency parts, and service calls. IT teams can create a recurring schedule to perform preventive maintenance on the data center. Regularly checking the infrastructure status and repairing and upgrading the required components promptly can keep the internal infrastructure running efficiently, as well as extend the lifespan and overall efficiency of the data center infrastructure.

Article Source: Data Center Infrastructure Basics and Management Solutions

Related Articles:

Data Center Migration Steps and Challenges

What Are Data Center Tiers?

Understanding Data Center Redundancy


Maximizing uptime should be the top priority for every data center, be it small or hyperscale. To keep your data center constantly running, a plan for redundant systems is a must.

What Is Data Center Redundancy?

Data center redundancy refers to a system design where critical components such as UPS units, cooling systems and backup generators are duplicated so that data center operations can continue even if a component fails. For example, a redundant UPS system starts working when a power outage happens. In the event of downtime due to hazardous weather, power outages, or component failures, data center backup components play their role to keep the whole system running.

Why Is Data Center Redundancy Important?

It is imperative for businesses to increase uptime and to recover quickly from downtime, whether unexpected or planned. Downtime hurts business. It can have a serious and direct impact on brand image, business operations, and customer experience, resulting in devastating financial losses, missed business opportunities, and a tarnished reputation. Even for small businesses, unscheduled downtime can cost hundreds of dollars per minute.

Redundancy configuration in data centers helps cut the risk of downtime, thus reducing losses caused by undesired impacts. A well-planned redundancy design means shorter potential downtime in the long run. Moreover, redundant components help keep data safe and secure by allowing data center operations to continue through component failures.

Redundancy is also a crucial factor in gauging data center reliability, performance and availability. The Uptime Institute offers a tier classification system that certifies data centers according to four distinct tiers—Tier 1, Tier 2, Tier 3 and Tier 4. Each tier has strict and specific requirements around data center redundancy level.

Different Levels of Redundancy

There is no one-size-fits-all redundancy design. Lower levels of redundancy mean increased potential downtime in the long run, while more redundancy results in less downtime but higher costs for maintaining the redundant components. If your business model requires as little downtime as possible, however, the extra cost is often justifiable in terms of profit and overall net growth. To choose the right configuration for your business, it is important to recognize the capabilities and risks of the different redundancy models, including N, N+1, N+X, 2N, 2N+1, and 3N/2.

N Model

N equals the amount of capacity required to power, backup, or cool a facility at full IT load. It can represent the units that you want to duplicate such as a generator, UPS, or cooling unit. For example, if a data center requires three UPS units to operate at full capacity, N would equal three.

An architecture of N means the facility is designed only to keep a data center running at full capacity. Simply put, N is the same as zero redundancy. If the data center facility is at full load and there is a hardware failure, scheduled maintenance, or an unexpected outage, mission-critical applications would suffer. With an N design, any interruption would leave your business unable to access your data until the issue is resolved.

N+1 or N+X Model

An N+1 redundancy model provides a minimal level of resiliency by adding a single component, such as a UPS, HVAC system, or generator, to the N architecture so that the facility can ride through a failure and maintain a full workload. When one system is offline, the extra component takes over the load. Going back to the previous example, if N equals three UPS units, N+1 provides four. Likewise, an N+2 redundancy design provides two extra components; in our example, N+2 provides five UPS units. In general, N+X provides X extra components to reduce risk in the event of multiple simultaneous failures.
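
The unit counts for these models reduce to simple arithmetic. Below is a minimal sketch using the article's example of N = 3 UPS units; it also covers the 2N variants described in the next sections. The function name and model strings are illustrative assumptions.

```python
def units_required(n: int, model: str) -> int:
    """Total component count for a redundancy model, given a base need of n units."""
    if model == "N":
        return n                   # no redundancy
    if model.startswith("N+"):
        return n + int(model[2:])  # N plus X spare components
    if model == "2N":
        return 2 * n               # fully mirrored second set
    if model == "2N+1":
        return 2 * n + 1           # mirror plus one extra component
    raise ValueError(f"unknown model: {model}")

n = 3  # the article's example: three UPS units needed at full load
for model in ("N", "N+1", "N+2", "2N", "2N+1"):
    print(f"{model}: {units_required(n, model)} UPS units")
# N: 3, N+1: 4, N+2: 5, 2N: 6, 2N+1: 7
```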

2N Model

2N redundancy creates a mirror image of the original UPS, cooling system or generators to provide full fault tolerance. It means if three UPS units are necessary to support full capacity, this redundancy model would include an additional set of three UPS units, for a total of six systems. This design also utilizes two independent distribution systems.

With a 2N model, data center operators can take down an entire set of components for maintenance without affecting normal operations. Moreover, in the event of unscheduled multiple component failures, the additional set takes over to maintain full capacity. The resiliency of this model greatly cuts the risks of downtime.

2N+1 Model

If 2N means full fault tolerance, 2N+1 delivers the fully fault-tolerant 2N model plus an extra component for extra protection. Not only can this model withstand multiple component failures, even in a worst-case scenario when the entire primary system is offline, it can still sustain N+1 redundancy. For its high level of reliability, this redundancy model is generally used by businesses that cannot tolerate even minor service disruptions.

[Figure: N+1 and 2N+1 redundancy]

3N/2 Model

The three-to-make-two or 3N/2 redundancy model refers to a methodology where additional capacity is scaled to the load of the system. In a 3N/2 scenario, three power delivery systems power two servers' worth of load, which means each power delivery system runs at about 67% of its available capacity. Likewise, in a 4N/3 design, four power delivery systems power three workloads. A 3N/2 design could in theory be upgraded to 4N/3, but such an elaborate model has so many components that it would be very difficult to manage and balance loads to maintain redundancy.
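
The per-system utilization in these distributed xN/(x-1) schemes follows directly from the ratio of required capacity to installed systems; a quick sketch (illustrative only):

```python
def per_system_utilization(systems: int) -> float:
    """In an xN/(x-1) design, x systems carry a load sized for x-1 of them,
    so each runs at (x-1)/x of capacity while still surviving one failure."""
    return (systems - 1) / systems

for x in (3, 4):  # the 3N/2 and 4N/3 designs discussed above
    print(f"{x}N/{x - 1}: each system at {per_system_utilization(x):.0%} capacity")
# 3N/2: each system at 67% capacity
# 4N/3: each system at 75% capacity
```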


What’s the Right One for You?

Choosing a redundant model that meets your business needs can be challenging. Finding the right balance between reliability and cost is the key. For businesses that require as little downtime as possible, higher levels of redundancy are justifiable in terms of profit and overall net growth. For those that do not, lower levels of redundancy are acceptable. They are cheaper and more energy-efficient than the other more sophisticated redundancy designs.

In a word, there’s no right or wrong redundancy model because it depends on a range of factors like your business goals, budget, and IT environment. Consult your data center provider or discuss with your IT team to figure out the best option for you.

Article Source: Understanding Data Center Redundancy

Related Articles:

What Are Data Center Tiers?

Data Center UPS: Deployments & Buying Guide

5 Factors to Consider for Data Center Environmental Monitoring


What Are Data Center Environmental Standards?

Data center environmental monitoring is vital for equipment operation. Data center architecture is divided into four tiers, and the equipment placed inside also shapes the design of data center environmental standards.

  • Tier I defines data center standards for facilities with minimal redundancy.
  • Tier II provides redundant critical power and cooling components.
  • Tier III adds redundant transmission paths for power and cooling to redundant critical components.
  • Tier IV infrastructure is built on Tier III and adds the concept of fault tolerance to the infrastructure topology.

Enterprises must comply with fairly stringent environmental standards to ensure these facilities remain functional.

Evolution of Data Center Environmental Standards

As early as the 1970s and 1980s, data center environmental monitoring revolved around power facilities: for example, whether the environment housing the power supply was properly isolated, and whether the main power supply affected the operation of the overall equipment. Cooling, however, was rarely monitored. Some enterprises explored cooling technologies such as liquid cooling, but typically enterprises used loud fans to control airflow. In some countries the cost of electricity was high, so there was a greater emphasis on being able to supply enough electricity for a given system configuration.

In the 1990s, rack power density became a consideration in enterprise data center environmental standards. In the past, a simple power-factor calculation could yield the required cooling value for a data center, but increasing rack densities made such estimates inaccurate. At this point, enterprises had to re-plan the airflow patterns of data center racks and equipment, which required IT managers to know more statistics when designing a data center, such as pressure drop, air velocity, and flow resistance.

By the early 2000s, power densities were still increasing, and thermal modeling was seen as a potential answer to optimizing the cooling of data center environments. Because the necessary data was lacking, temperature data was typically collected only after a data center was built, and IT managers then made adjustments based on that information. Enterprises should instead choose the correct thermal model of equipment when building a data center to enhance environmental monitoring. Below are several environmental control methods to consider when building a data center.

5 Factors in Data Center Environmental Controls

To ensure the reliable operation of IT equipment within a rack, the primary concerns for monitoring and controlling data center environmental conditions are temperature, humidity, static electricity, fire, and physical security. These factors affect not only the ecological environment but also data center security, energy efficiency, and the enterprise's social image.

Temperature Control

Thermal control is always a challenging issue for data centers, as servers emit heat while running, and servers paralyzed by overheating can cripple data center operations. Temperature monitoring checks whether equipment is operating within the recommended temperature range. Temperature sensors are an effective tool here: placed in strategic locations and read together, they allow IT managers to manage temperature promptly.

Humidity Control

Humidity control is closely related to temperature levels. High humidity can corrode hardware, while low humidity can cause electrostatic arcing problems. For this reason, cooling and ventilation systems need to detect and control the relative humidity in the room air. ASHRAE recommends operating within a dew point range of 41.9 to 59 degrees Fahrenheit with a maximum relative humidity of 60%. Datacenter designers need to invest in systems that can detect humidity and water near equipment to better monitor cooling fans and measure airflow during routine management. In larger facilities, a set of computer room air conditioner (CRAC) units can create consistent airflow throughout the room. These CRAC systems typically work by drawing in hot air, cooling it, and expelling it through vents and air intakes leading to the servers.
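
To see how that envelope might be checked in software, here is a small sketch that estimates dew point from temperature and relative humidity using the standard Magnus approximation, then flags readings outside the ranges quoted above. Treat it as illustrative, not as a monitoring product; the sensor reading is made up.

```python
import math

def dew_point_f(temp_c: float, rh_pct: float) -> float:
    """Estimate dew point (deg F) via the Magnus approximation."""
    a, b = 17.62, 243.12  # Magnus constants for water vapor over liquid
    gamma = math.log(rh_pct / 100.0) + (a * temp_c) / (b + temp_c)
    dp_c = (b * gamma) / (a - gamma)
    return dp_c * 9 / 5 + 32

def within_ashrae(temp_c: float, rh_pct: float) -> bool:
    """Check the envelope quoted above: dew point 41.9-59 F, RH at most 60%."""
    dp = dew_point_f(temp_c, rh_pct)
    return 41.9 <= dp <= 59.0 and rh_pct <= 60.0

# Hypothetical sensor reading: 24 C at 45% relative humidity.
print(round(dew_point_f(24.0, 45.0), 1))  # ~52.4 F
print(within_ashrae(24.0, 45.0))          # True
```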

Electricity Monitoring

Static electricity is also one of the threats in the data center environment, and an invisible one. Some newer IT components can be damaged or completely fried by a discharge of less than 25 volts. Left unaddressed, this problem can result in frequent disconnections, system crashes, and even data corruption. Unexpected bursts of energy in the form of electrostatic discharge may be the greatest threat to the performance of the average data center. To prevent such incidents, businesses should install strategically located monitors that detect the buildup of static electricity.

Fire Suppression

A comprehensive fire suppression system is a must-have feature in data center environmental standards. To protect an entire data center from disaster, designers need to take security measures spanning fire detection and suppression systems as well as physical and virtual systems. Fire suppression systems should be regularly tested and actively monitored to ensure that they will do their job in the clutch.

Security Systems

Physical security is also a very important part of data center environmental standards. IT departments must institute controls that keep intruders away from buildings as well as the server rooms and racks within them. Setting up a complete range of physical security, from IP surveillance systems to advanced sensors, is a desirable approach: if unauthorized personnel are detected entering a building or server rack, data center managers are alerted.

Summary

The purpose of data center environmental monitoring is to provide a better operating environment for facilities and to avoid unplanned incidents that affect the business. The environmental controls above help enterprises maintain data center security at design time, support ongoing management, and keep the data center's impact on ecology and energy efficiency properly under control.

Article Source: 5 Factors to Consider for Data Center Environmental Monitoring

Related Articles:

Things You Should Know About Data Center Power

What Is Data Center Security?

What Is Edge Computing?


In today’s data-driven world, where businesses rely on real-time information to make critical decisions, edge computing has become the technology of choice. By moving some portion of compute and storage resources out of the central data center and closer to the data source, latency issues, bandwidth limitations, and network disruptions are greatly reduced.

With edge computing, data produced on a factory floor or in a retail store is processed and analyzed at the network’s edge, within the premises. Since the data doesn’t travel across networks, speed is one obvious advantage. This translates to instant analysis of data, faster response by site personnel, and real-time decision-making.

How Edge Computing Works

Edge computing brings computing power closer to the data source, where sensors and other data capturing instruments are located. The entire edge computing process takes place inside intelligent devices that speed up the processing of the various data collected before the devices connect to the IoT.

The goal of edge computing is to boost efficiency. Instead of sending all the data collected by sensors to the enterprise applications for processing, edge devices do the computing and only send important data for further analysis or storage. This is possible thanks to edge AI, i.e., artificial intelligence at the edge.

After the edge devices do the computation of the data with the help of edge AI, these devices group the data collected or results obtained into different categories. The three basic categories are:

  • Data that doesn’t need further action and shouldn’t be stored or transmitted to enterprise applications.
  • Data that should be retained for further analysis or record keeping.
  • Data that requires an immediate response.

The work of edge computing is to discriminate between these data sets, identify the level of response and the action required, and then act accordingly, as sketched below.
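
A minimal sketch of this triage logic might look like the following. The sensor names, temperature field, and thresholds are made-up illustrations; real edge AI would replace the hard-coded rules with a trained model.

```python
# Illustrative triage of sensor readings into the three categories above.
def triage(reading: dict) -> str:
    """Classify a reading as 'respond', 'retain', or 'discard' (hypothetical rules)."""
    if reading["temperature_c"] > 90.0:   # outlier: act locally, right now
        return "respond"
    if reading["temperature_c"] > 70.0:   # interesting: ship upstream for analysis
        return "retain"
    return "discard"                      # routine: no storage or transmission

readings = [
    {"sensor": "press-04", "temperature_c": 95.2},
    {"sensor": "press-07", "temperature_c": 74.1},
    {"sensor": "press-09", "temperature_c": 41.0},
]
for r in readings:
    print(r["sensor"], "->", triage(r))
# Only 'respond' and 'retain' readings would ever cross the network.
```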


Depending on the compute power of the edge device and the complexity of the data collected, the device may act on outlier data and provide a real-time response, or send it to the enterprise application for further analysis with immediate retrieval of the results. Since only the important and urgent data sets are sent over the network, bandwidth requirements are reduced. This results in substantial cost savings, especially on wireless cellular networks.

Why Edge Computing?

There are several reasons why edge computing is winning the popularity battle in the enterprise computing world. Digital transformation initiatives, from robotics & advanced automation to AI and data analytics, all have one thing in common – they are largely data-dependent. Most industries that leverage these technologies are also time-sensitive, meaning the data they produce becomes irrelevant in a matter of minutes, if not seconds.

The large amounts of data currently produced by IoT devices strain a shared computing model through system congestion and network disruption. For time- and disruption-sensitive applications, this results in huge financial losses, injuries, and costly damage. The attractiveness of edge computing often narrows down to the three network challenges it seeks to solve:

Latency – lag in communication between devices and across the network delays decision-making in time-sensitive applications. Edge computing solves this problem with a more distributed network, ensuring there is no disconnect in real-time information transfer and processing. The result is a more reliable and consistent network.

Bandwidth – every network has limited bandwidth, especially wireless communications. Edge computing eases bandwidth limitations by processing immense volumes of data near the network’s edge and then sending only the most relevant information through the network. This minimizes the volume of data that requires a cellular connection.

Data Compliance and Governance – organizations that handle sensitive data are subject to the data regulations of various countries. By processing this data near its source, companies can keep sensitive customer and employee data within their borders, ensuring compliance.

Edge Computing Use Cases

Over the years, edge data centers have found several use cases across industries, thanks to rapid tech adoption and the benefits of processing data at the network edge. Ideally, any application that requires moving large amounts of data to a centralized data center before retrieving the result and insights could benefit a lot from edge computing. Below are the different ways several industries use edge computing in their day-to-day operations:

Transportation – autonomous vehicles produce around 5 to 20 terabytes of data daily from information about speed, location, traffic conditions, road conditions, etc. This data must be organized, processed, and analyzed in real-time, and insights fed into the system while the vehicle is on the road. This time-sensitive application requires accurate, reliable, and consistent onboard computing.

Manufacturing – several manufacturers now deploy edge computing to monitor manufacturing processes and enable real-time analytics. By coupling this with machine learning and AI, edge computing can help streamline manufacturing processes with real-time insights, predictive analytics, and more.

Farming – indoor farming relies on different sensors that collect a wide range of data that must be processed and analyzed to gain insights into the crops’ health, weather conditions, nutrient density, etc. Edge computing makes this data processing and insights generation faster, hence faster response and decision making.

The other areas where edge computing has been adopted include healthcare facilities to help patients avoid health issues in real-time and retail to optimize vendor ordering and predict sales.

Edge Computing Challenges

Edge computing isn’t without its challenges, and the common ones revolve around security and data lifecycles. Applications that rely on IoT devices are vulnerable to data breaches, which could compromise security at the edge. As far as data lifecycles are concerned, the challenge comes with the large amount of data stored at the network’s edge: a ton of useless data may take up critical space, so businesses should choose carefully which data to keep and which to discard.

Edge computing also relies on some level of connectivity, and the typical network limitations are another cause for concern. It’s, therefore, necessary to plan for connectivity problems and design an edge computing deployment that can accommodate common networking issues.

Implementing Edge Computing

Regardless of the industry you are in, edge computing comes with several benefits, but only if it’s designed well and deployed to solve the challenges common with centralized data centers. To get the most from your investment, you want to work with a reputed edge computing company or an expert IT consultant to guide you on the best way forward.

Article Source: What Is Edge Computing?

Related Articles:

Edge Computing vs. Multi-Access Edge Computing

Micro Data Center and Edge Computing

Why Green Data Center Matters


Background

Green data centers have emerged in enterprise construction because of the continuous growth of new data storage requirements and a steady rise in environmental awareness. Newly retained data must be protected, cooled, and transferred efficiently, so the huge energy demands of data centers present challenges in terms of cost and sustainability, and enterprises are increasingly concerned about the energy demands of their facilities. As a result, sustainable and renewable energy resources have become the development trend of green data centers.

Green Data Center Is a Trend

A green data center is a facility similar to a regular data center that hosts servers to store, manage, and disseminate data. It is designed to minimize environmental impact by providing maximum energy efficiency. Green data centers have the same characteristics as typical data centers, but the internal system settings and technologies can effectively reduce energy consumption and carbon footprints for enterprises.

The internal construction of a green data center requires the support of a series of services, such as cloud services, cable TV services, Internet services, colocation services, and data protection security services. Of course, many enterprises or carriers have equipped their data centers with cloud services. Some enterprises may also need to rely on other carriers to provide Internet and related services.

According to market research, the global green data center market was worth around $59.32 billion in 2021 and is expected to grow at a CAGR of 23.5% through 2026. This also shows that the transition to renewable energy sources is accelerating along with the growth of green data centers.

As the growing demand for data storage drives the modernization of data centers, it also places higher demands on power and cooling systems. On the one hand, data centers rely on converting non-renewable energy into electricity, resulting in rising electricity costs; on the other hand, some enterprises consume large amounts of water for cooling facilities and server cleaning. All of these are ample opportunities for the green data center market. For example, as Facebook and Amazon continue to expand their businesses, the need for data storage among global companies grows. These enterprises need vast amounts of data to analyze potential customers, and processing it requires a great deal of energy. Building green data centers has therefore become an urgent need for enterprises, and one that brings many other benefits as well.

Green Data Center Benefits

The green data center concept has grown rapidly in the process of enterprise data center development. Many businesses prefer alternative energy solutions for their data centers, which can bring many benefits to the business. The benefits of green data centers are as follows.

Energy Saving

Green data centers are designed not only to conserve energy but also to reduce the need for expensive infrastructure to handle cooling and power needs. Sustainable or renewable energy is an abundant and reliable source of power that can significantly reduce power usage effectiveness (PUE), and a lower PUE enables enterprises to use electricity more efficiently. Green data centers can also use colocation services to decrease server usage, lower water consumption, and reduce the cost of corporate cooling systems.

Cost Reduction

Green data centers use renewable energy to reduce power consumption and business costs through the latest technologies. Shutting down servers that are being upgraded or managed can also help reduce energy consumption at the facility and control operating costs.

Environmental Sustainability

Green data centers can reduce the environmental impact of computing hardware, thereby creating data center sustainability. Ever-advancing technology brings new equipment and techniques into modern data centers, and new server devices and virtualization technologies reduce energy consumption, which is environmentally sustainable and brings economic benefits to data center operators.

Enterprise Social Image Enhancement

Today, users are increasingly interested in solving environmental problems, and green data center services help businesses address these issues without compromising performance. Many customers already see responsible business conduct as a value proposition. By building green data centers to meet the compliance and regulatory requirements of their regions, enterprises improve their social image.

Reasonable Use of Resources

In an environmentally friendly way, green data centers allow enterprises to make better use of resources such as electricity, physical space, and heat by integrating the internal facilities of the data center. This promotes the efficient operation of the data center while achieving rational utilization of resources.

5 Ways to Create a Green Data Center

Having covered the benefits of a green data center, how do you build one? Here are several green data center solutions.

  • Virtualization extension: Enterprises can build a virtualized computer system with the help of virtualization technology, and run multiple applications and operating systems through fewer servers, thereby realizing the construction of green data centers.
  • Renewable energy utilization: Enterprises can opt for solar panels, wind turbines or hydroelectric plants that can generate energy to power backup generators without any harm to the environment.
  • Enter eco mode: Running alternating-current UPS units in eco mode is one way to significantly improve data center efficiency and PUE. Alternatively, enterprises can reuse equipment, which not only saves money but also keeps unnecessary emissions out of the atmosphere.
  • Optimized cooling: Data center infrastructure managers can introduce simple and implementable cooling solutions, such as deploying hot aisle/cold aisle configurations. Data centers can further accelerate cooling output by investing in air handlers and coolers, and installing economizers that draw outside air from the natural environment to build green data center cooling systems.
  • DCIM and BMS systems: DCIM software and BMS software can help data center managers identify and document ways to use energy more efficiently, helping data centers become more efficient and achieve sustainability goals.

Conclusion

Data center sustainability means reducing energy/water consumption and carbon emissions to offset increased computing and mobile device usage to keep business running smoothly. The development of green data centers has become an imperative development trend, and it also caters to the green goals of global environmental protection. As a beneficiary, enterprises can not only save operating costs, but also effectively reduce energy consumption. This is also an important reason for the construction of green data centers.

Article Source: Why Green Data Center Matters

Related Articles:

Data Center Infrastructure Basics and Management Solutions

What Is a Data Center?

Cloud vs Data Center: What’s the Difference?


Many people are confused about what cloud computing is and what a data center is. They often ask questions like, “Is a cloud a data center?”, “Is a data center a cloud?” or “Are data centers and cloud computing two completely different things?” Maybe you know your company needs the cloud and a data center. And you also know your data center needs the cloud and vice versa. But you just don’t know why! Don’t worry. This essay will give you a thorough understanding of the two terms and the differences between the cloud and a data center. Let’s begin with their definitions.

Cloud vs Data Center: What Are They?

The term “data center” can be interpreted in a few different ways. First, an organization can run an in-house data center maintained by trained IT employees whose job is to keep the system up and running. Second, it can refer to an offsite storage center that consists of servers and other equipment needed to keep the stored data accessible both virtually and physically.

The term “cloud” or “cloud computing,” by contrast, didn’t exist before the advent of the Internet. Cloud computing changes the way businesses work. Rather than storing data locally on individual computers or a company’s network, cloud computing entails the delivery of data and shared resources via a secure and centralized remote platform. Rather than using a company’s own servers, it places resources in the hands of a third-party organization that offers such a service.

Cloud vs Data Center in Security

Since the cloud is an external form of computing, it may be less secure, or require more work to secure, than a data center. Unlike a data center, where you are responsible for your own security, the cloud entrusts your data to a third-party provider that may or may not have the most up-to-date security certifications. If your cloud is hosted across several data centers in different locations, each location will also need proper security measures.

A data center is also physically connected to a local network, which makes it easier to ensure that only those with company-approved credentials and equipment can access stored apps and information. The cloud, however, is accessible to anyone with the proper credentials wherever there is an Internet connection. This opens a wide array of entry and exit points, all of which need to be protected to make sure that data transmitted to and from them is secure.

Cloud vs Data Center in Cost

For most small businesses, cloud computing is a more cost-effective option than a data center, because when you choose a data center, you have to build the infrastructure from scratch and are responsible for your own maintenance and administration. A data center also takes much longer to get started and can cost businesses $10 million to $25 million per year to operate and maintain.

Unlike a data center, cloud computing does not require time or capital to get up and running. Instead, most cloud computing providers offer a range of affordable subscription plans to meet customers’ budgets and scale the service to their actual needs. And while data centers take time to build, cloud services are available almost immediately after registration.
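
As a toy comparison of the two cost models, the sketch below contrasts a hypothetical facility cost (taken from the $10M-$25M range cited above) with an entirely assumed cloud subscription; the specific figures are illustrations, not quotes from any provider.

```python
# Toy annual cost comparison; all figures are assumptions for illustration.
DATA_CENTER_ANNUAL = 15_000_000      # within the $10M-$25M/yr range cited above
CLOUD_MONTHLY_SUBSCRIPTION = 50_000  # hypothetical subscription for one business

def annual_cost(option: str) -> int:
    """Return the assumed yearly cost of running on each option."""
    if option == "data_center":
        return DATA_CENTER_ANNUAL
    if option == "cloud":
        return CLOUD_MONTHLY_SUBSCRIPTION * 12
    raise ValueError(option)

for option in ("data_center", "cloud"):
    print(f"{option}: ${annual_cost(option):,}/year")
# data_center: $15,000,000/year
# cloud: $600,000/year  -> why small businesses usually start in the cloud
```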

Conclusion

Going forward, cloud computing services will become increasingly attractive thanks to their low cost and convenience. The cloud creates a new way to facilitate collaboration and information access across great geographic distances while reducing costs. Comparing cloud computing with data centers, therefore, the future of cloud computing looks much brighter.

Related Article: The Evolution of Data Center Switching

Three Types MTP Harness Cables Used in Today’s Data Center


As we know, harness cables are generally used to connect high-density switches that have LC serial transceivers installed. The transition harness connects to the pre-installed MTP backbone trunk cable and then furcates to LC connectors entering the switch. These MTP-LC harness cables are usually supplied in short lengths because they are normally only used for “in-rack” connections. Transition harnesses are available for Base-8, Base-12, and Base-24 backbones, and the LC tails are numbered for clear port identification and traceability.


Application Scene

[Figure: MTP-LC harness cable application]

Another harness cable type is the conversion harness cable, which allows users to convert their existing MTP backbone cables to an MTP type that matches their active equipment. Conversion harnesses are a low-loss alternative to conversion modules because they eliminate one mated MTP pair across the link. Many of today’s legacy infrastructures are built using a Base-12 MTP backbone design; however, experience shows that this connector is rarely used on higher-data-rate switches or servers. Currently, Base-8 is the preferred connector for 40G (SR4) transceivers and Base-24 is the preferred connector for 100G (SR10) transceivers.


The final type of harness cable is the MTP trunk harness cable. MTP trunk harness cables are high-density multi-stranded cables that form the backbone of the data center. These trunk harness cables are available in fiber counts up to 144 fibers, which reduces installation time by consolidating multiple sub-units into a single cable. This approach significantly reduces the overall diameter of the cable and makes much better use of cable routing channels. Like the two harness cable types above, MTP trunk harness cables are available with 8-, 12-, and 24-fiber sub-units so that users can deploy Base-8, Base-12, or Base-24 infrastructures to suit their MTP connectivity requirements.
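
The sub-unit arithmetic is straightforward: a 144-fiber trunk, for example, breaks into 18 Base-8, 12 Base-12, or 6 Base-24 sub-units, as this small sketch illustrates (the function is a made-up helper, not a product tool).

```python
def subunits(total_fibers: int, base: int) -> int:
    """Number of sub-units when a trunk is built from Base-`base` units."""
    if total_fibers % base != 0:
        raise ValueError(f"{total_fibers} fibers do not divide evenly into Base-{base}")
    return total_fibers // base

for base in (8, 12, 24):
    print(f"144-fiber trunk as Base-{base}: {subunits(144, base)} sub-units")
# Base-8: 18 sub-units, Base-12: 12 sub-units, Base-24: 6 sub-units
```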


Conversion and Trunk Harness Cable Application

[Figure: conversion and trunk harness cable application]

Data Center Upgrade — Who Should Be Responsible for Buying Transceivers?


There was a time when cable products were specifically associated with hardware OEMs. If a company was buying or using one of these vendors’ products, the matching cables also had to be used. Therefore, whoever was responsible for managing the hardware was also responsible for the cabling used to connect the devices together. Then the structured cabling industry changed this: the cabling infrastructure is now viewed as an independent asset, separate from the IT hardware. This has allowed companies to make purchasing decisions for IT and cabling without regard for each other. But this may be a problem. To understand why, let’s first review how a LAN operates.


The OSI Model of LAN Network
As we know, the operation of local area networking (LAN) was defined with the Open Systems Interconnection Reference Model (OSI Model). The OSI Model defined seven layers of operation. By using the model, the industry could develop networking functions in a modular fashion and still ensure interoperability. The bottom of the stack is Layer 1, the Physical Layer. Layer 1 includes the cabling that is used to connect the various pieces of equipment together so that the data can be transported. The next step up on the stack is Layer 2, the Data Link Layer. Layer 2 provides for addressing and switching, so that the data can be sent to the appropriate destination. Layer 3 is the Network Layer, where data can be routed to another network. Layers 4 through 7 deal with software implementations.

The OSI Model meant that an end-user could purchase software (Layer 7) and expect it to work on multiple vendors’ hardware (Layer 2), and the hardware could be connected using cabling from multiple vendors (Layer 1). Structured cabling now had a home within Layer 1. This model led to a division of responsibility for cabling versus network design specifications. The end-user ended up having “cabling people” and “networking people” on staff. Each group used its own set of vendors and supply chains to specify and source materials, and each needed only a basic understanding of what the other was doing. This system has worked very well for the enterprise LAN. So what’s the problem?

What Is the Problem?
In the 1990s, copper cable was widely used in data center cabling deployment. As time went on, optical fiber cable was added. In fiber switches, it is common to use pluggable transceivers. This is done for a variety of reasons, but one is cost. Even though a transceiver is plugged into a switch, it is part of the OSI Model’s Layer 1, the Physical Layer. Additionally, most of the transceiver is part of the Physical Media Dependent (PMD) portion of Layer 1, as illustrated here. This means that the transceiver and the cable types must match.

[Figure: the transceiver within the Physical Media Dependent (PMD) portion of Layer 1]
However, unlike copper, there was never a fixed standard for the connector type or channel distance. Fiber has many different standards and connector options. With multiple fiber types, multiple operating wavelengths, and multiple connectivity options, the number of solutions seems limitless. Since the transceiver is physically plugged into the switch, it has always been considered the networking group’s responsibility. “Networking people” are responsible for buying transceivers and “cabling people” are responsible for buying cabling products, and this split causes the problem. Take the following real-life case as an example.

Real-life Case and Solution
Company A has a data center. Marsha is the facilities manager and is responsible for the data cabling. She designed a cabling plan that migrated from 1G to 10G. Anticipating the 40G requirements defined by IEEE 802.3ba (40GBASE-SR4), she used a cassette-based platform to allow the transition from the LC connectivity of 10G to the MPO connectivity of 40G. Greg is the network manager. As the migration to 40G switches was about to commence, his hardware vendor recommended changing to a new, unique transceiver solution that used LC connectivity. This appeared to be a great idea because it would mean that Marsha would not have to change any of her connectivity. However, he did not consult Marsha, because hardware decisions are his to make. When the 40G switches arrived, Marsha was surprised by the connectivity choice because it limited her power budget. This division of responsibility caused the problem.

Greg needs to have a 40G connection from Rack A to Rack B. From a Layer 2/3 perspective, that is all that matters. He still has the responsibility and complete control to define his needs and select equipment vendors for things like switches, routers, servers, etc. Instead of defining the form of the data rate, he simply specifies the speed. By shifting the single component (pluggable transceiver) from Greg to Marsha, the organization can make its decision much more efficiently. Greg does not have to worry about the variety of fiber and transceiver options, nor the impacts that they have on each other. And Marsha can manage the entire optical link, from transceiver to transceiver, which is all within Layer 1. Her experience with fiber and connectivity options puts her in a better position to determine which transceiver options are the most appropriate.

Conclusion
Looking back, the onset of structured cabling separated cabling purchasing from IT hardware purchasing. Looking at the present and into the future, rapidly increasing data rates, especially in the data center, require another shift in the way we conduct business. By redefining the link to include not only cabling and connectivity but also the transceiver, we put Layer 1 performance in the hands of the people most familiar with it. FS.COM provides a full range of transceivers and matched cabling products at cost-effective prices, aiming to offer high performance-price-ratio solutions.

The Era of Fusion Splicing Is Coming


As fiber deployment has become mainstream, splicing has naturally crossed from the outside plant (OSP) world into the enterprise and even the data center environment. Fusion splicing involves the use of localized heat to melt together, or fuse, the ends of two optical fibers. The preparation process involves removing the protective coating from each fiber, precise cleaving, and inspection of the fiber end-faces. Fusion splicing has been around for several decades, and it’s a trusted method for permanently fusing together the ends of two optical fibers to realize a specific length or to repair a broken fiber link. However, due to the high cost of fusion splicers, it was not widely adopted. In recent years, improvements in optical technology have been changing this, and the continued demand for increased bandwidth has also spread the application of fusion splicing.

New Price of Fusion Splicers
Fusion splicer costs have been one of the biggest obstacles to broad adoption of fusion splicing, and in recent years significant decreases in splicer prices have accelerated its popularity. Today’s fusion splicers range in cost from $7,000 to $40,000. The highest-priced units are designed for specialty optical fibers, such as polarization-maintaining fibers used in the production of high-end non-electrical sensors. The lower-end fusion splicers, in the $7,000 to $10,000 range, are primarily single-fiber fixed V-groove devices. The popular core alignment splicers range between $17,000 and $19,000, well below the $30,000 price of 20 years ago. Prices have dropped dramatically thanks to more efficient manufacturing, and volume is up because fiber is no longer a voodoo science and more people are working in that arena. Recently, more and more fiber is being deployed closer to the customer premises with higher splice-loss budgets, so more customers are purchasing lower-end splicers to accomplish their jobs.

More Cost-effective Cable Solutions
The first and primary use of splicing in the telecommunications industry is to link fibers together in underground or aerial outside-plant fiber installations. It used to be very common to fusion splice at the building entrance to transition from outdoor-rated to indoor-rated cable, because the NEC (National Electrical Code) specifies that outdoor-rated cable can only come 50 feet into a building due to its flame rating. The advent of plenum-rated indoor/outdoor cable has driven that transition splicing to a minimum. But that’s not to say that fusion splicing on the premises isn’t going on.

Longer distances in the outside plant could mean that sticking with standard outdoor-rated cable and fusion splicing at the building entrance could be the more economical choice. If it’s a short run between building A and B, it makes sense to use newer indoor/outdoor cable and come right into the crossconnect. However, because indoor/outdoor cables are generally more expensive, if it’s a longer run with lower fiber counts between buildings, it could ultimately be cheaper to buy outdoor-rated cable and fusion splice to transition to indoor-rated cable, even with the additional cost of splice materials and housing.

As fiber to the home (FTTH) applications continue to grow around the globe, they present another situation that may call for fusion splicing. To achieve longer distances in an FTTH application, you have to either fusion splice or use an interconnect. However, an interconnect can introduce 0.75 dB of loss, while a fusion splice is typically less than 0.02 dB. Therefore, the easiest way to minimize loss on an FTTH circuit is to bring the individual fibers from each workstation back to the closet and then splice to a higher-fiber-count cable. This approach also enables centralizing electronics for more efficient port utilization. In FTTH applications, fusion splicing is now also being used to install connectors for customer drop cables using new splice-on connector technology and drop cable fusion splicers.
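
Using the per-joint figures above (roughly 0.75 dB per interconnect versus about 0.02 dB per fusion splice), a quick budget calculation shows why splicing dominates on loss-sensitive FTTH links. The joint counts in the example are made up for illustration.

```python
# Rough FTTH link-loss comparison using the per-joint figures cited above.
INTERCONNECT_LOSS_DB = 0.75   # typical mated-connector loss from the text
FUSION_SPLICE_LOSS_DB = 0.02  # typical fusion splice loss from the text

def joint_loss(n_interconnects: int, n_splices: int) -> float:
    """Total loss contributed by connectors and splices along a link."""
    return n_interconnects * INTERCONNECT_LOSS_DB + n_splices * FUSION_SPLICE_LOSS_DB

# Hypothetical link: replacing three interconnects with fusion splices.
print(f"3 interconnects: {joint_loss(3, 0):.2f} dB")   # 2.25 dB
print(f"3 fusion splices: {joint_loss(0, 3):.2f} dB")  # 0.06 dB
```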


A Popular Option for Data Centers
A significant increase in the number of applications supported by data centers has resulted in more cables and connections than ever, making available space a foremost concern. As a result, higher-density solutions like MTP/MPO connectors and multi-fiber cables, which take up less pathway space than individual duplex cables, have become more popular.

Since few manufacturers offer field-installable MTP/MPO connectors, many data center managers select either multi-fiber trunk cables with MTP/MPOs factory-terminated on each end, or fusion splicing to pre-terminated MTP/MPO or multi-fiber LC pigtails. When selecting trunk cables with connectors on each end, data center managers often specify slightly longer lengths because they can’t always predict exact distances between equipment and don’t want to come up short. They then have to deal with the excess slack, and when there are thousands of connections, that slack can create a lot of congestion and limit proper airflow and cooling. One alternative is to purchase a multi-fiber pigtail and then splice it to a multi-fiber cable.

Inside the data center and in the enterprise LAN, 12-fiber MPO connectors provide a convenient method to support higher 40G and 100G bandwidth. Instead of fusing one fiber at a time, another type of fusion splicing, called ribbon/mass fusion splicing, is used. Ribbon/mass fusion splicing can fuse up to all 12 fibers in a ribbon at once, which can reduce termination labor by up to 75% with only a modest increase in tooling cost. Many of today’s high-fiber-count cables use subunits of 12 fibers each that can be quickly ribbonized. Splicing those fibers individually is very time-consuming, whereas ribbon/mass fusion splicers splice entire ribbons simultaneously. Ribbon/mass fusion splicer technology has been around for decades and is now available in handheld models.
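
To put the labor savings in rough numbers, the sketch below compares splicing a 144-fiber cable one fiber at a time against splicing twelve 12-fiber ribbons. The per-splice times are assumptions chosen purely for illustration, so the resulting percentage is indicative rather than a quoted figure.

```python
# Illustrative labor comparison for a 144-fiber cable (times are assumptions).
FIBERS = 144
SINGLE_SPLICE_MIN = 2.0   # assumed minutes per single-fiber splice, incl. prep
RIBBON_SPLICE_MIN = 8.0   # assumed minutes per 12-fiber ribbon splice, incl. prep

single_total = FIBERS * SINGLE_SPLICE_MIN          # 288 minutes
ribbon_total = (FIBERS // 12) * RIBBON_SPLICE_MIN  # 96 minutes
savings = 1 - ribbon_total / single_total

print(f"one-at-a-time: {single_total:.0f} min, ribbon: {ribbon_total:.0f} min")
print(f"labor saved: {savings:.0%}")  # ~67% with these assumed times
```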


Conclusion
Fusion splicing provides permanent low-loss connections that can be performed quickly and easily, which are definite advantages over competing technologies. In addition, current fusion splicers are designed to provide enhanced features and high-quality performance while remaining affordable. Fiberstore provides various types of fusion splicers for different uses, with high quality at low prices. For more information, please feel free to contact us at sales@fs.com.