
The Chip Shortage: Current Challenges, Predictions, and Potential Solutions


The COVID-19 pandemic forced many companies to shut down, reducing production and disrupting supply chains. In the tech world, where silicon microchips are the heart of everything electronic, the raw material shortage became a barrier to new product creation and development.

During the lockdown periods, all but essential workers were required to stay home, which left chip manufacturing lines idle for several months. By the time lockdowns were lifted and the world embraced the new normal, the rising demand for consumer and business electronics was enough to send ripples up the supply chain.

Below, we’ve discussed the challenges associated with the current chip shortage, what to expect moving forward, and the possible interventions necessary to overcome the supply chain constraints.

Challenges Caused by the Current Chip Shortage

As technology and rapid innovation sweep across industries, semiconductor chips have become an essential part of manufacturing – from devices like switches, wireless routers, computers, and automobiles to basic home appliances.

To understand and quantify the impact the chip shortage has had across industries, we'll need to look at some of the most affected sectors. Here's a quick breakdown of how things have unfolded over the last eighteen months.

Automobile Industry

Automakers in North America and Europe have slowed or stopped production due to a lack of computer chips. Major automakers like Tesla, Ford, BMW, and General Motors have all been affected. The major implication is that the global automobile industry will manufacture 4 million fewer cars by the end of 2021 than earlier planned, and it will forfeit an estimated $110 billion in revenue.

Consumer Electronics

Consumer electronics such as desktop PCs and smartphones rose in demand throughout the pandemic, thanks to the shift to virtual learning among students and the rise in remote working. At the start of the pandemic, several automakers slashed their vehicle production forecasts before abandoning open semiconductor chip orders. And while the consumer electronics industry stepped in and scooped up most of those microchips, supply still couldn't keep up with demand.

Data Centers

Most chip fabrication companies like Samsung Foundries, Global Foundries, and TSMC prioritized high-margin orders from PC and data center customers during the pandemic. And while this has given data centers a competitive edge, it isn’t to say that data centers haven’t been affected by the global chip shortage.

Some of the components data centers have struggled to source include those needed to put together their data center switching systems. These include BMC chips, capacitors, resistors, circuit boards, etc. Another challenge is the extended lead times due to wafer and substrate shortages, as well as reduced assembly capacity.

LED Lighting

LED backlights, common in most display screens, are powered by hard-to-find semiconductor chips. Gadgets with LED lighting features are now priced at a premium due to the shortage of raw materials and increased market demand. This is expected to continue into the beginning of 2022.

Renewable Energy: Solar and Turbines

Renewable energy systems, particularly solar and wind turbines, rely on semiconductors and sensors to operate. The global supply chain constraints have hurt the industry, and even energy solutions manufacturers like Enphase Energy have felt the squeeze.

Semiconductor Trends: What to Expect Moving Forward

In response to the global chip shortage, several component manufacturers have ramped up production to help mitigate the shortages. However, top electronics and semiconductor manufacturers say the crunch will get worse before it gets better, and most of these industry leaders expect the semiconductor shortage to persist into 2023.

Based on the ongoing disruption and supply chain volatility, various analysts shared their views in a recent CNBC article and a Bloomberg interview, and many are convinced that the coming year will be challenging. Here are some of the key takeaways:

Pat Gelsinger, CEO of Intel Corp., noted in April 2021 that the chip shortage would take a couple of years to recover from.

A DigiTimes report found that lead times for Intel and AMD server ICs for data centers have stretched to 45 to 66 weeks.

The world's third-largest EMS and OEM provider, Flex Ltd., expects the global semiconductor shortage to persist into 2023.

In May 2021, Global Foundries, the fourth-largest contract semiconductor manufacturer, signed a $1.6 billion, three-year silicon supply deal with AMD, and in late June, it launched its new $4 billion, 300mm-wafer facility in Singapore. Yet the company says the added capacity will not increase component production until 2023 at the earliest.

TSMC, one of the leading pure-play foundries in the industry, says it won't meaningfully increase component output until 2023. However, it is optimistic that it can ramp up the fabrication of automotive microcontrollers by 60% by the end of 2021.

From the industry insights above, it’s evident that despite the many efforts that major players put into resolving the global chip shortage, the bottlenecks will probably persist throughout 2022.

Additionally, some industry observers believe that the move by big tech companies such as Amazon, Microsoft, and Google to design their own chips for cloud and data center business could worsen the chip shortage crisis and other problems facing the semiconductor industry.

In a recent article, the authors hint that the entry of Microsoft, Amazon, and Google into the chip design market will be a turning point for the industry. These tech giants have the resources to design superior, cost-effective chips of their own, resources that most chip designers like Intel have only in limited proportions.

As these tech giants become more independent, each will look to build component stockpiles to endure long waits and meet production demands between inventory refreshes. Again, this will further worsen the existing chip shortage.

Possible Solutions

To stay ahead of the game, major industry players – chip designers and manufacturers, along with the many affected industries – have taken several steps to mitigate the impacts of the chip shortage.

For many chip makers, expanding their production capacity has been an obvious response. Other suppliers in certain regions decided to stockpile and limit exports to better respond to market volatility and political pressures.

Similarly, improving yields – increasing the number of usable chips manufactured from a single silicon wafer – is an area many manufacturers have invested in to boost chip supply.

Here are the other possible solutions that companies have had to adopt:

Embracing flexibility to accommodate older chip technologies that may not be “state of the art” but are still better than nothing.

Leveraging software solutions such as smart compression and compilation to build efficient AI models to help unlock hardware capabilities.

Conclusion

The latest global chip shortage has sent severe shocks through the semiconductor supply chain, affecting industries from automobiles and consumer electronics to data centers, LED lighting, and renewables.

Industry thought leaders believe the shortages will persist into 2023 despite current mitigation efforts. And while a full recovery will not be witnessed any time soon, some chip makers are optimistic that they can ramp up fabrication to meet the demand of their automotive customers.

That said, staying ahead of the game is an ongoing struggle, considering this is an issue affecting every industry player, regardless of size or market position. Expanding production capacity, accommodating older chip technologies, and leveraging software solutions to unlock hardware capabilities are some of the most promising solutions.


This article is updated continuously. If you want to share any comments on FS switches, or if you are inclined to test and review our switches, please email us at media@fs.com or reach us on social media. We look forward to hearing your ideas about FS switches.

Article Source: The Chip Shortage: Current Challenges, Predictions, and Potential Solutions

Related Articles:

Impact of Chip Shortage on Datacenter Industry

Impact of Chip Shortage on Datacenter Industry


As the global chip shortage rips on, many chip manufacturers have had to slow or even halt semiconductor production. Makers of all kinds of electronics, such as switches, PCs, and servers, are scrambling to get enough chips in the pipeline to match the surging demand for their products. Every manufacturer, supplier, and solution provider in the datacenter industry is feeling the impact of the ongoing chip scarcity, and relief is nowhere in sight yet.

What’s Happening?

Due to the rise of AI and cloud computing, datacenter chips have been a hot topic in recent times. Networking switches and modern servers, the indispensable equipment of datacenter applications, use more advanced components than an average consumer's PC, so data centers are naturally given top priority by chip manufacturers and suppliers. However, with the demand for data center machines far outstripping supply, chip shortages may remain pervasive over the next few years. Economic uncertainties caused by the pandemic put further stress on datacenter management.

According to a report from the Dell’Oro Group, robust datacenter switch sales over the past year could foretell a looming shortage. As the mismatch in supply and demand keeps growing, enterprises looking to buy datacenter switches face extended lead times and elevated costs over the course of the next year.

“So supply is decreasing and demand is increasing,” said Sameh Boujelbene, leader of the analyst firm’s campus and data-center research team. “There’s a belief that things will get worse in the second half of the year, but no consensus on when it’ll start getting better.”

Back in March, Broadcom said that more than 90% of its total chip output for 2021 had already been ordered by customers, who are pressuring it for chips to meet booming demand for servers used in cloud data centers and consumer electronics such as 5G phones.

“We intend to meet such demand, and in doing so, we will maintain our disciplined process of carefully reviewing our backlog, identifying real end-user demand, and delivering products accordingly,” CEO Hock Tan said on a conference call with investors and analysts.

Major Implications

Extended Lead Times

Arista Networks, one of the largest data center networking switch vendors and a supplier of switches to cloud providers, warns that switch-silicon lead times will extend to as long as 52 weeks.

“The supply chain has never been so constrained in Arista history,” the company’s CEO, Jayshree Ullal, said on an earnings call. “To put this in perspective, we now have to plan for many components with 52-week lead time. COVID has resulted in substrate and wafer shortages and reduced assembly capacity. Our contract manufacturers have experienced significant volatility due to country specific COVID orders. Naturally, we’re working more closely with our strategic suppliers to improve planning and delivery.”

Hock Tan, CEO of Broadcom, also acknowledged on an earnings call that the company had “started extending lead times.” Part of the problem, he said, was that customers were now ordering more chips and demanding them faster than usual, hoping to buffer against supply chain issues.

Elevated Cost

Vertiv, one of the biggest sellers of datacenter power and cooling equipment, mentioned it had to delay previously planned “footprint optimization programs” due to strained supply. The company’s CEO, Robert Johnson, said on an earnings call, “We have decided to delay some of those programs.”

Supply chain constraints combined with inflation would cause “some incremental unexpected costs over the short term,” he said, “To share the cost with our customers where possible may be part of the solution.”

“Prices are definitely going to be higher for a lot of devices that require a semiconductor,” says David Yoffie, a Harvard Business School professor who spent almost three decades serving on the board of Intel.

Conclusion

There is no telling how the situation will continue playing out or, most importantly, when supply and demand might get back to normal. Opinions vary on when the shortage will end. The CEO of chipmaker STMicro estimated that the shortage will end by early 2023, while Intel CEO Patrick Gelsinger said it could last two more years.

As a high-tech network solutions and services provider, FS has been actively working with our customers to help them plan for, adapt to, and overcome the supply chain challenges, hoping that we can ride out this chip shortage crisis together. At the very least, we cannot lose hope. As Bill Wyckoff, vice president at technology equipment provider SHI International, advised, “This is not an ‘all is lost’ situation. There are ways and means to keep your equipment procurement and refresh plans on track if you work with the right partners.”

Article Source: Impact of Chip Shortage on Datacenter Industry

Related Articles:

The Chip Shortage: Current Challenges, Predictions, and Potential Solutions

Data Center Infrastructure Basics and Management Solutions


Datacenter infrastructure refers to all the physical components in a data center environment. These physical components play a vital role in the day-to-day operations of a data center, so data center management is an urgent issue for IT departments: on the one hand, improving the energy efficiency of the data center; on the other, tracking the data center's operating performance in real time to keep it in good working condition and sustain enterprise development.

Data Center Infrastructure Basics

Data center infrastructure standards divide facilities into four tiers, each with its own facility requirements. The infrastructure itself mainly includes cabling systems, power facilities, cooling facilities, network infrastructure, storage infrastructure, and computing resources.

There are roughly two types of infrastructure inside a data center: the core components and IT infrastructure. Network infrastructure, storage infrastructure, and computing resources belong to the former, while cooling equipment, power, redundancy, etc. belong to the latter.

Core Components

Network, storage, and computing systems are the core components of a data center, providing shared access to applications and data.

Network Infrastructure

Datacenter network infrastructure is the combination of network resources – switches, routers, load balancers, analytics, and so on – that facilitates the storage and processing of applications and data. Modern data center networking architectures, using full-stack networking and security virtualization platforms that support a rich set of data services, can connect everything from VMs and containers to bare-metal applications while enabling centralized management and fine-grained security controls.

Storage Infrastructure

Datacenter storage is a general term for the tools, technologies, and processes for designing, implementing, managing, and monitoring storage infrastructure and resources in data centers – mainly the equipment and software technologies that implement data and application storage in data center facilities. These include hard drives, tape drives, and other forms of internal and external storage, as well as backup management software and external storage facilities/solutions.

Computing Resources

A data center's computing resources are the memory and processing power to run applications, usually provided by high-end servers. In the edge computing model, the processing and memory used to run applications on servers may be virtualized, physical, distributed among containers, or distributed among remote nodes.

IT Infrastructure

As data centers become critical to enterprise IT operations, it is equally important to keep them running efficiently. When designing data center infrastructure, it is necessary to evaluate the physical environment, including the cabling, power, and cooling systems, to ensure the physical security of the data center.

Cabling Systems

Integrated cabling is an important part of data center cable management, supporting the connection, intercommunication, and operation of the entire data center network. The system is usually composed of copper cables, optical cables, connectors, and wiring equipment. A data center's integrated cabling system is characterized by high density, high performance, high reliability, fast installation, modularity, future readiness, and ease of application.

Power Systems

Datacenter digital infrastructure requires electricity to operate, and even an interruption of a fraction of a second has a significant impact. Hence, power infrastructure is one of the most critical components of a data center. The data center power chain starts at the substation and runs through building transformers, switches, uninterruptible power supplies, and power distribution units to remote power panels, racks, and servers.

Cooling Systems

Data center servers generate a lot of heat while running, so cooling is critical to data center operations, keeping systems online. The amount of heat that can be removed from each rack limits the amount of power a data center can consume. Generally, each rack is designed for an average cooling density of 5-10 kW, though some may be higher.
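To make that relationship concrete, here is a minimal Python sketch of how per-rack cooling density caps the supportable IT load. The 40-rack room is a hypothetical example, not a figure from this article.

```python
# Per-rack cooling density bounds the total IT load a room can support.
# Rack count and densities below are illustrative, not from a real design.

def max_supported_load_kw(num_racks: int, density_kw_per_rack: float) -> float:
    """Total IT load (kW) the room can cool at the given per-rack density."""
    return num_racks * density_kw_per_rack

# A hypothetical 40-rack room across the typical 5-10 kW per-rack range:
for density in (5.0, 10.0):
    total = max_supported_load_kw(40, density)
    print(f"{density:.0f} kW/rack x 40 racks -> {total:.0f} kW supportable load")
```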


Data Center Infrastructure Management Solutions

Due to the complexity of IT equipment in a data center, the availability, reliability, and maintenance of its components require more attention. Efficient data center operations can be achieved through balanced investments in facilities and accommodating equipment.

Energy Usage Monitoring Equipment

Traditional data centers lack the energy usage monitoring instruments and sensors required to comply with ASHRAE standards and to collect the measurement data used to calculate data center PUE, leaving the power system poorly monitored. One remedy is to install energy monitoring components and systems on power systems to measure data center energy efficiency. With these measurements, enterprise teams can implement effective strategies to balance overall energy usage and monitor the energy usage of all other nodes.
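PUE itself is a simple ratio: total facility energy divided by the energy delivered to IT equipment. A minimal sketch with hypothetical meter readings:

```python
# Power Usage Effectiveness (PUE) = total facility energy / IT equipment energy.
# A PUE of 1.0 would mean every watt entering the facility reaches the IT load.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical monthly meter readings:
print(f"PUE = {pue(total_facility_kwh=1_500_000, it_equipment_kwh=900_000):.2f}")
```

The closer the result is to 1.0, the less energy is being spent on cooling, power conversion, and other overhead.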

Cooling Facilities Optimization

Independent computer room air conditioning units used in traditional data centers often have separate controls and set points, resulting in excessive operation as units fight each other over temperature and humidity adjustments. A good way to help servers stay cool is to create hot-aisle/cold-aisle layouts that maximize the flow of cold air to the equipment intakes and carry hot exhaust air away from the equipment racks. Adding partitions or ceilings to form contained hot or cold aisles eliminates the mixing of hot and cold air.

CRAC Efficiency Improvement

Packaged DX air conditioners are likely the most common type of cooling equipment for smaller data centers. These units are often described as CRAC units. There are, however, several ways to improve the energy efficiency of a cooling system employing DX units. Indoor CRAC units are available with a few different heat rejection options:

  • As with rooftop units, adding evaporative spray can improve the efficiency of air-cooled CRAC units.
  • A pre-cooling water coil can be added to the CRAC unit upstream of the evaporator coil. When ambient conditions allow the condenser water to be cooled to the extent that it provides direct cooling benefits to the air entering the CRAC unit, the condenser water is diverted to the pre-cooling coil. This reduces, and sometimes eliminates, the need for compressor-based cooling for the CRAC unit.

DCIM

Data center infrastructure management is the combination of IT and operations to manage and optimize the performance of data center infrastructure within an organization. DCIM tools help data center operators monitor, measure, and manage the utilization and energy consumption of data center-related equipment and facility infrastructure components, effectively improving the relationship between data center buildings and their systems.

DCIM enables bridging of information across organizational domains such as data center operations, facilities, and IT to maximize data center utilization. Data center operators create flexible and efficient operations by visualizing real-time temperature and humidity status, equipment status, power consumption, and air conditioning workloads in server rooms.

Preventive Maintenance

In addition to the above management and operation solutions for infrastructure, unplanned maintenance is also an aspect to consider. Unplanned maintenance typically costs 3-9 times more than planned maintenance, primarily due to overtime labor costs, collateral damage, emergency parts, and service calls. IT teams can create a recurring schedule to perform preventive maintenance on the data center. Regularly checking the infrastructure status and repairing and upgrading the required components promptly can keep the internal infrastructure running efficiently, as well as extend the lifespan and overall efficiency of the data center infrastructure.
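As a rough feel for that 3-9x multiplier, here is a back-of-the-envelope comparison; the event counts and per-event cost are hypothetical.

```python
# Back-of-the-envelope: planned vs. unplanned maintenance cost, using the
# 3-9x multiplier cited above. All figures are hypothetical.

def annual_cost(events_per_year: int, cost_per_event: float, multiplier: float = 1.0) -> float:
    return events_per_year * cost_per_event * multiplier

planned = annual_cost(events_per_year=4, cost_per_event=2_000)                    # quarterly PM
unplanned = annual_cost(events_per_year=2, cost_per_event=2_000, multiplier=6.0)  # mid-range of 3-9x

print(f"Planned:   ${planned:,.0f} per year")
print(f"Unplanned: ${unplanned:,.0f} per year")
```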

Article Source: Data Center Infrastructure Basics and Management Solutions

Related Articles:

Data Center Migration Steps and Challenges

What Are Data Center Tiers?

Understanding Data Center Redundancy


Maximizing uptime should be the top priority for every data center, be it small or hyperscale. To keep your data center constantly running, a plan for redundant systems is a must.

What Is Data Center Redundancy?

Data center redundancy refers to a system design where critical components such as UPS units, cooling systems and backup generators are duplicated so that data center operations can continue even if a component fails. For example, a redundant UPS system starts working when a power outage happens. In the event of downtime due to hazardous weather, power outages, or component failures, data center backup components play their role to keep the whole system running.

Why Is Data Center Redundancy Important?

It is imperative for businesses to increase uptime and to recover quickly from downtime, whether unexpected or planned. Downtime hurts business. It can have a serious and direct impact on brand image, business operations, and customer experience, resulting in devastating financial losses, missed business opportunities, and a tarnished reputation. Even for small businesses, unscheduled downtime can cost hundreds of dollars per minute.

Redundancy configuration in data centers helps cut the risk of downtime, thus reducing losses caused by undesired impacts. A well-planned redundancy design means shorter potential downtime in the long run. Moreover, redundant components help keep data safe and secure by keeping data center operations running through component failures.

Redundancy is also a crucial factor in gauging data center reliability, performance and availability. The Uptime Institute offers a tier classification system that certifies data centers according to four distinct tiers—Tier 1, Tier 2, Tier 3 and Tier 4. Each tier has strict and specific requirements around data center redundancy level.

Different Levels of Redundancy

There is no one-size-fits-all redundancy design. Lower levels of redundancy mean more potential downtime in the long run, while more redundancy results in less downtime but higher costs for maintaining the redundant components. If your business model requires as little downtime as possible, the extra cost is often justifiable in terms of profit and overall net growth. To choose the right configuration for your business, it is important to recognize the capabilities and risks of the different redundancy models, including N, N+1, N+X, 2N, 2N+1, and 3N/2.

N Model

N equals the amount of capacity required to power, backup, or cool a facility at full IT load. It can represent the units that you want to duplicate such as a generator, UPS, or cooling unit. For example, if a data center requires three UPS units to operate at full capacity, N would equal three.

An architecture of N means the facility is designed only to keep a data center running at full capacity. Simply put, N is the same as zero redundancy. If the data center facility is at full load and there is a hardware failure, scheduled maintenance, or an unexpected outage, mission-critical applications would suffer. With an N design, any interruption would leave your business unable to access your data until the issue is resolved.

N+1 or N+X Model

An N+1 redundancy model provides a minimal level of resiliency by adding a single component – a UPS, HVAC system, or generator – to the N architecture to withstand a failure and maintain the full workload. When one system goes offline, the extra component takes over the load. Going back to the previous example, if N equals three UPS units, N+1 provides four. Likewise, an N+2 redundancy design provides two extra components: in our example, N+2 provides five UPS units instead of four. In general, N+X provides X extra components to reduce risk in the event of multiple simultaneous failures.

2N Model

2N redundancy creates a mirror image of the original UPS, cooling system or generators to provide full fault tolerance. It means if three UPS units are necessary to support full capacity, this redundancy model would include an additional set of three UPS units, for a total of six systems. This design also utilizes two independent distribution systems.

With a 2N model, data center operators can take down an entire set of components for maintenance without affecting normal operations. Moreover, in the event of unscheduled multiple component failures, the additional set takes over to maintain full capacity. The resiliency of this model greatly cuts the risks of downtime.

2N+1 Model

If 2N means full fault tolerance, 2N+1 delivers the fully fault-tolerant 2N model plus an extra component for extra protection. Not only can this model withstand multiple component failures, even in a worst-case scenario when the entire primary system is offline, it can still sustain N+1 redundancy. For its high level of reliability, this redundancy model is generally used by businesses that cannot tolerate even minor service disruptions.


3N/2 Model

The three-to-make-two or 3N/2 redundancy model bases additional capacity on the load of the system. In a 3N/2 scenario, three power delivery systems carry two systems' worth of load, which means each power delivery system utilizes about 67% of its available capacity. Likewise, in 4N/3, four power delivery systems power three workloads. In theory, 3N/2 could be extended to 4N/3 and beyond, but such elaborate models have so many components that it becomes very difficult to manage and balance loads while maintaining redundancy.
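To summarize the models above, here is a short Python sketch that turns a base requirement of N units into the total each model implies. Treating 3N/2 as "capacity scaled by 3/2, rounded up" is our simplification of the three-to-make-two idea, not a formal definition.

```python
# Total component counts implied by common redundancy models,
# given a base requirement of N units at full load.

import math

def units_required(n: int, model: str) -> int:
    if model == "N":
        return n                      # zero redundancy
    if model == "2N":
        return 2 * n                  # full mirror
    if model == "2N+1":
        return 2 * n + 1              # full mirror plus one spare
    if model == "3N/2":
        return math.ceil(3 * n / 2)   # our simplification of three-to-make-two
    if model.startswith("N+"):
        return n + int(model[2:])     # N+1, N+2, ... N+X
    raise ValueError(f"unknown model: {model}")

# The article's example: N = 3 UPS units.
for model in ("N", "N+1", "N+2", "2N", "2N+1", "3N/2"):
    print(f"{model:>5}: {units_required(3, model)} UPS units")
```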


What’s the Right One for You?

Choosing a redundant model that meets your business needs can be challenging. Finding the right balance between reliability and cost is the key. For businesses that require as little downtime as possible, higher levels of redundancy are justifiable in terms of profit and overall net growth. For those that do not, lower levels of redundancy are acceptable. They are cheaper and more energy-efficient than the other more sophisticated redundancy designs.

In a word, there’s no right or wrong redundancy model because it depends on a range of factors like your business goals, budget, and IT environment. Consult your data center provider or discuss with your IT team to figure out the best option for you.

Article Source: Understanding Data Center Redundancy

Related Articles:

What Are Data Center Tiers?

Data Center UPS: Deployments & Buying Guide

5 Factors to Consider for Data Center Environmental Monitoring


What Are Data Center Environmental Standards

Data center environmental monitoring is vital for device operations. Data center architecture is classified into four tiers, and the equipment placed inside also shapes the design of data center environmental standards.

  • Tier I defines data center standards for facilities with minimal redundancy.
  • Tier II provides redundant critical power and cooling components.
  • Tier III adds redundant transmission paths for power and cooling to redundant critical components.
  • Tier IV infrastructure is built on Tier III and adds the concept of fault tolerance to the infrastructure topology.

Enterprises must comply with fairly stringent environmental standards to ensure these facilities remain functional.

Evolution of Data Center Environmental Standards

As early as the 1970s and 1980s, data center environmental monitoring revolved around power facilities – for example, whether the environment housing the power supply was properly isolated and whether the main power supply affected the operation of the overall equipment – but cooling was rarely monitored. Some enterprises explored cooling technologies for data centers, such as liquid cooling, but typically enterprises used loud fans to control airflow. In some countries the cost of electricity was high, so there was a greater emphasis on being able to supply enough electricity for a given system configuration.

In the 1990s, rack power density became a consideration in enterprise data center environmental standards. In the past, a simple power factor calculation could yield the required cooling value for a data center, but increasing rack densities made such estimates inaccurate. At this point, enterprises had to re-plan the airflow patterns of data center racks and equipment, which required IT managers to know more statistics when designing a data center, such as pressure drop, air velocity, and flow resistance.

By the early 2000s, power densities were still increasing, and thermal modeling was seen as a potential answer to optimizing the cooling of data center environments. The lack of necessary data meant that temperature data was typically collected only after data center construction, and IT managers then had to make adjustments based on that information. Enterprises should choose the correct thermal model of equipment when building a data center to enhance environmental monitoring. Below are several environmental control methods to consider when building a data center.

5 Factors in Data Center Environmental Controls

To ensure the reliable operation of IT equipment within a rack, the primary concerns for monitoring and controlling data center environmental conditions are temperature, humidity, static electricity, fire, and physical security. The environmental impact of these factors extends beyond the ecological environment to data center security, energy efficiency, and the enterprise's social image.

Temperature Control

Thermal control is always a challenging issue for data centers, as running servers emit heat, and overheating can cripple data center operations. Temperature monitoring checks whether equipment is operating within the recommended temperature range. Temperature sensors are an effective solution: placing them in strategic locations and reading the overall temperature allows IT managers to act promptly.

Humidity Control

Humidity control is closely related to temperature levels. High humidity can corrode hardware, while low humidity can cause electrostatic arcing problems. For this reason, cooling and ventilation systems need to detect and control the relative humidity of the room air. ASHRAE recommends operation within a dew point range of 41.9 to 59 degrees Fahrenheit with a maximum relative humidity of 60%. Datacenter designers should invest in systems that can detect humidity and water near equipment to better monitor cooling fans and measure airflow during routine management. In larger facilities, a set of computer room air conditioner (CRAC) units can create consistent airflow throughout the room; these CRAC systems typically draw in hot air, cool it, and expel it through vents and air intakes leading to the servers.
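That envelope is easy to encode as a monitoring check. Here is a minimal sketch using only the dew point range and relative-humidity cap quoted above; the sensor reading is made up.

```python
# Check a sensor reading against the humidity envelope quoted above:
# dew point 41.9-59 degrees F, relative humidity at most 60%.

def humidity_alarms(dew_point_f: float, relative_humidity_pct: float) -> list[str]:
    alarms = []
    if not 41.9 <= dew_point_f <= 59.0:
        alarms.append(f"dew point {dew_point_f} F outside the 41.9-59 F range")
    if relative_humidity_pct > 60.0:
        alarms.append(f"relative humidity {relative_humidity_pct}% above the 60% maximum")
    return alarms

# Hypothetical reading from a rack-inlet sensor:
for alarm in humidity_alarms(dew_point_f=62.0, relative_humidity_pct=65.0):
    print("ALARM:", alarm)
```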

Electricity Monitoring

Static electricity is another threat in the data center environment: an invisible nuisance. Some newer IT components can be damaged or completely fried by less than 25 volts of discharge. Left unaddressed, static can cause frequent disconnections, system crashes, and even data corruption. Unexpected electrostatic discharges may be the greatest threat to the performance of the average data center. To prevent such incidents, businesses must install strategically located monitors that detect the buildup of static electricity.

Fire Suppression

A comprehensive fire suppression system is a must-have in data center environmental standards. To protect an entire data center from disaster, designers need to take security measures spanning fire detection and suppression systems as well as physical and virtual systems. Fire suppression systems should be regularly tested and actively monitored to ensure they will do their job when it counts.

Security Systems

Physical security is also a very important part of data center environmental standards. IT departments must institute limits that keep intruders away from buildings, server rooms, and the racks within them. Setting up a complete range of physical security, from IP surveillance systems to advanced sensors, is a desirable approach: if unauthorized personnel are detected entering a building or server rack, data center managers are alerted.

Summary

The purpose of data center environmental monitoring is to provide a better operating environment for facilities and to avoid unplanned incidents that affect the enterprise's business. Applying the environmental controls above when designing a data center helps enterprises maintain data center security and simplifies data center management. It also properly limits the data center's environmental impact on ecology and energy efficiency.

Article Source: 5 Factors to Consider for Data Center Environmental Monitoring

Related Articles:

Things You Should Know About Data Center Power

What Is Data Center Security?

What Is Edge Computing?


In today's data-driven world, where businesses rely on real-time information to make critical decisions, edge computing has become the technology of choice. By moving a portion of compute and storage resources out of the central data center and closer to the data source, latency issues, bandwidth limitations, and network disruptions are greatly minimized.

With edge computing, data produced on a factory floor or in a retail store is processed and analyzed at the network's edge, within the premises. Since data doesn't travel across networks, speed is one obvious advantage. This translates to instant analysis of data, faster response by site personnel, and real-time decision-making.

How Edge Computing Works

Edge computing brings computing power closer to the data source, where sensors and other data capturing instruments are located. The entire edge computing process takes place inside intelligent devices that speed up the processing of the various data collected before the devices connect to the IoT.

The goal of edge computing is to boost efficiency. Instead of sending all the data collected by sensors to the enterprise applications for processing, edge devices do the computing and only send important data for further analysis or storage. This is possible thanks to edge AI, i.e., artificial intelligence at the edge.

After the edge devices do the computation of the data with the help of edge AI, these devices group the data collected or results obtained into different categories. The three basic categories are:

  • Data that doesn’t need further action and shouldn’t be stored or transmitted to enterprise applications.
  • Data that should be retained for further analysis or record keeping.
  • Data that requires an immediate response.

The work of edge computing is to discriminate between these data sets, identify the level of response and action required, and then act accordingly.
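A toy sketch of that triage, driven by a single hypothetical temperature stream with made-up thresholds; a real deployment would let an edge AI model classify much richer data.

```python
# Toy triage in the spirit of the three categories above:
# discard, retain for analysis, or respond immediately at the edge.

from enum import Enum

class Action(Enum):
    DISCARD = "discard"   # no action; don't store or transmit
    RETAIN = "retain"     # forward for further analysis or record keeping
    RESPOND = "respond"   # outlier; needs an immediate local response

def triage(temperature_c: float) -> Action:
    if temperature_c > 90.0:
        return Action.RESPOND
    if temperature_c > 70.0:
        return Action.RETAIN
    return Action.DISCARD

for reading in (25.0, 75.0, 95.0):
    print(f"{reading} C -> {triage(reading).value}")
```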


Depending on the compute power of the edge device and the complexity of the data collected, the device may act on outlier data and provide a real-time response, or send it to the enterprise application for further analysis and retrieve the results in real time. Since only the important and urgent data sets are sent over the network, bandwidth requirements are reduced. This results in substantial cost savings, especially over wireless cellular networks.

Why Edge Computing?

There are several reasons why edge computing is winning the popularity battle in the enterprise computing world. Digital transformation initiatives, from robotics & advanced automation to AI and data analytics, all have one thing in common – they are largely data-dependent. Most industries that leverage these technologies are also time-sensitive, meaning the data they produce becomes irrelevant in a matter of minutes, if not seconds.

The large amounts of data currently produced by IoT devices strain a shared computing model through system congestion and network disruption, which results in huge financial losses, injuries, and costly damages for time- and disruption-sensitive applications. The attractiveness of edge computing often narrows down to the three network challenges it seeks to solve. These are:

Latency – a lag in communication between devices and the network delays decision-making in time-sensitive applications. Edge computing solves this problem with a more distributed network, which ensures there's no disconnect in real-time information transfer and processing. The result is a more reliable and consistent network.

Bandwidth – every network has limited bandwidth, especially wireless communications. Edge computing overcomes bandwidth limitations by processing immense volumes of data near the network's edge and then sending only the most relevant information through the network. This minimizes the volume of data that requires a cellular connection.

Data Compliance and Governance – organizations that handle sensitive data are subject to the data regulations of various countries. By processing this data near its source, these companies can keep sensitive customer and employee data within their borders, ensuring compliance.

Edge Computing Use Cases

Over the years, edge data centers have found several use cases across industries, thanks to rapid tech adoption and the benefits of processing data at the network edge. Ideally, any application that requires moving large amounts of data to a centralized data center before retrieving the result and insights could benefit a lot from edge computing. Below are the different ways several industries use edge computing in their day-to-day operations:

Transportation – autonomous vehicles produce around 5 to 20 terabytes of data daily from information about speed, location, traffic conditions, road conditions, etc. This data must be organized, processed, and analyzed in real-time, and insights fed into the system while the vehicle is on the road. This time-sensitive application requires accurate, reliable, and consistent onboard computing.

Manufacturing – several manufacturers now deploy edge computing to monitor manufacturing processes and enable real-time analytics. By coupling this with machine learning and AI, edge computing can help streamline manufacturing processes with real-time insights, predictive analytics, and more.

Farming – indoor farming relies on different sensors that collect a wide range of data that must be processed and analyzed to gain insights into the crops’ health, weather conditions, nutrient density, etc. Edge computing makes this data processing and insights generation faster, hence faster response and decision making.

The other areas where edge computing has been adopted include healthcare facilities to help patients avoid health issues in real-time and retail to optimize vendor ordering and predict sales.

Edge Computing Challenges

Edge computing isn't without its challenges, and some of the most common revolve around security and data lifecycles. Applications that rely on IoT devices are vulnerable to data breaches, which could compromise security at the edge. As for data lifecycles, the challenge comes from the large amount of data stored at the network's edge: a ton of useless data may take up critical space, so businesses should choose carefully which data to keep and which to discard.

Edge computing also relies on some level of connectivity, and the typical network limitations are another cause for concern. It’s, therefore, necessary to plan for connectivity problems and design an edge computing deployment that can accommodate common networking issues.

Implementing Edge Computing

Regardless of the industry you are in, edge computing comes with several benefits, but only if it’s designed well and deployed to solve the challenges common with centralized data centers. To get the most from your investment, you want to work with a reputed edge computing company or an expert IT consultant to guide you on the best way forward.

Article Source: What Is Edge Computing?

Related Articles:

Edge Computing vs. Multi-Access Edge Computing

Micro Data Center and Edge Computing

Why Green Data Center Matters


Background

Green data centers have emerged in enterprise construction due to the continuous growth of data storage requirements and steadily rising environmental awareness. Newly retained data must be protected, cooled, and transferred efficiently, which means the huge energy demands of data centers present challenges in terms of cost and sustainability, and enterprises are increasingly concerned about the energy demands of their data centers. Sustainable and renewable energy resources have thus become the development trend for green data centers.

Green Data Center Is a Trend

A green data center is a facility similar to a regular data center that hosts servers to store, manage, and disseminate data. It is designed to minimize environmental impact by providing maximum energy efficiency. Green data centers have the same characteristics as typical data centers, but the internal system settings and technologies can effectively reduce energy consumption and carbon footprints for enterprises.

The internal construction of a green data center requires the support of a series of services, such as cloud services, cable TV services, Internet services, colocation services, and data protection security services. Of course, many enterprises or carriers have equipped their data centers with cloud services. Some enterprises may also need to rely on other carriers to provide Internet and related services.

According to market research, the global green data center market was worth around $59.32 billion in 2021 and is expected to grow at a CAGR of 23.5% through 2026. This also shows that the transition to renewable energy sources is accelerating along with the growth of green data centers.
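Compounding those quoted figures gives a feel for the trajectory; this is plain arithmetic on the numbers above, not an independent forecast.

```python
# Compound the quoted 2021 market size at the quoted CAGR through 2026.

base_year, base_value_bn, cagr = 2021, 59.32, 0.235

for year in range(base_year, 2027):
    value = base_value_bn * (1 + cagr) ** (year - base_year)
    print(f"{year}: ${value:,.1f}B")
# At these assumptions, 2026 works out to roughly $170B.
```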

As the growing demand for data storage drives the modernization of data centers, it also places higher demands on power and cooling systems. On the one hand, data centers burn non-renewable energy to generate electricity, driving up electricity costs; on the other hand, some enterprises rely on large amounts of water for cooling facilities and server cleaning. All of this creates ample opportunities for the green data center market. For example, as Facebook and Amazon continue to expand their businesses, the need for data storage among global companies grows. These enterprises need vast amounts of data to analyze potential customers, and processing it requires a great deal of energy. Building green data centers has therefore become an urgent need for enterprises, and it brings additional benefits as well.

Green Data Center Benefits

The green data center concept has grown rapidly in the process of enterprise data center development. Many businesses prefer alternative energy solutions for their data centers, which can bring many benefits to the business. The benefits of green data centers are as follows.

Energy Saving

Green data centers are designed not only to conserve energy but also to reduce the need for expensive infrastructure to handle cooling and power needs. Sustainable or renewable energy is an abundant and reliable source of power that can significantly improve power usage effectiveness (PUE). A lower PUE means enterprises use electricity more efficiently. Green data centers can also use colocation services to decrease server usage, lower water consumption, and reduce the cost of corporate cooling systems.

Cost Reduction

Green data centers use renewable energy to reduce power consumption and business costs through the latest technologies. Shutting down servers that are being upgraded or managed can also help reduce energy consumption at the facility and control operating costs.

Environmental Sustainability

Green data centers reduce the environmental impact of computing hardware, creating data center sustainability. Ever-advancing technology calls for new equipment and techniques in modern data centers, and the lower power consumption of these new server devices and virtualization technologies reduces energy use, which is environmentally sustainable and brings economic benefits to data center operators.

Enterprise Social Image Enhancement

Today, users are increasingly interested in solving environmental problems, and green data center services help businesses address these issues quickly without compromising performance. Many customers already see responsible business conduct as a value proposition. By building green data centers that meet the compliance and regulatory requirements of their regions, enterprises improve their social image.

Reasonable Use of Resources

In an environmentally friendly way, green data centers allow enterprises to make better use of resources such as electricity, physical space, and heat by integrating the internal facilities of the data center. This promotes the efficient operation of the data center while achieving rational utilization of resources.

5 Ways to Create a Green Data Center

Having covered the benefits of a green data center, how do you build one? Here is a series of green data center solutions.

  • Virtualization extension: Enterprises can build a virtualized computer system with the help of virtualization technology, and run multiple applications and operating systems through fewer servers, thereby realizing the construction of green data centers.
  • Renewable energy utilization: Enterprises can opt for solar panels, wind turbines or hydroelectric plants that can generate energy to power backup generators without any harm to the environment.
  • Enter eco mode: Switching alternating-current UPSs to eco mode is one way to improve efficiency. This setup can significantly improve data center efficiency and PUE. Alternatively, enterprises can reuse equipment, which not only saves money but also keeps unnecessary emissions out of the atmosphere.
  • Optimized cooling: Data center infrastructure managers can introduce simple and implementable cooling solutions, such as deploying hot aisle/cold aisle configurations. Data centers can further accelerate cooling output by investing in air handlers and coolers, and installing economizers that draw outside air from the natural environment to build green data center cooling systems.
  • DCIM and BMS systems: DCIM software and BMS software can help data center managers identify and document ways to use energy more efficiently, helping data centers become more efficient and achieve sustainability goals.

Conclusion

Data center sustainability means reducing energy/water consumption and carbon emissions to offset increased computing and mobile device usage to keep business running smoothly. The development of green data centers has become an imperative development trend, and it also caters to the green goals of global environmental protection. As a beneficiary, enterprises can not only save operating costs, but also effectively reduce energy consumption. This is also an important reason for the construction of green data centers.

Article Source: Why Green Data Center Matters

Related Articles:

Data Center Infrastructure Basics and Management Solutions

What Is a Data Center?

Gigabit Slots: SFP Port vs. RJ45 Port vs. GBIC Port


In Gigabit Ethernet applications, SFP ports, RJ45 ports, and GBIC ports are used across different Gigabit devices, such as switches, routers, servers, and storage. Some of the latest wireless access points (APs) are also equipped with an SFP port. Since all three port types top out at 1 Gbit and go no higher, why do all three exist instead of just one?

RJ45 port and SFP port in an Ethernet switch

SFP Port vs. GBIC Port: An Improvement in Dimension

SFP ports and GBIC ports can be found in a variety of equipment, including Ethernet switches, routers, network interface cards, and servers. Today most Ethernet switches are designed with at least one or two Gigabit SFP uplink slots. What is an SFP port? As the name implies, an SFP port is intended for SFP (mini-GBIC) fiber modules with small form-factor (SFF) connectors, while a GBIC port accepts GBIC modules.

GBIC module and SFP module

The two types of ports provide the same data rates and distances in Gigabit applications, but the same number of SFP ports takes up less space than GBIC ports. Since they offer equal functionality, SFP has gradually taken the place of the older GBIC in Gigabit networking for space-saving and economic reasons. The table below compares SFP port vs. GBIC port.

Parameter | SFP Port | GBIC Port
Supported Optical Modules | SFP transceivers (single-mode/multimode, simplex/duplex, CWDM/DWDM) | GBIC transceivers (single-mode/multimode, simplex/duplex, CWDM/DWDM)
Transceiver Receptacle Type | LC, RJ45 | SC, RJ45
Supported Standards | 1000BASE-T, 1000BASE-SX, 1000BASE-LX, 1000BASE-LX10, 1000BASE-LX/LH, 1000BASE-LH, 1000BASE-EX, 1000BASE-ZX, 1000BASE-BX, 1000BASE-CWDM, 1000BASE-DWDM | Same as SFP
Supported Distances | Up to 100 m, 300 m, 2 km, 10 km, 20 km, 40 km, 80 km, 100 km, or 150 km | Same as SFP
Module DOM Function | Digital optical monitoring (DOM): real-time monitoring of the transceiver's operating temperature, optical input and output power, laser bias current, and supply voltage | Same as SFP

If you want to know more about the SFP types that SFP ports support and their detailed specifications, such as wavelengths and distances, read: How Many Types of SFP Transceivers Do You Know?

SFP Port vs. RJ45 Port: Distance Makes the Difference

The RJ45 ports on Gigabit networking devices follow the 1000BASE-T Ethernet standard. They support only twisted-pair cabling for network connections, and the distance is limited to 100 m (330 feet). An RJ45 port uses Category 5/5e/6 or higher copper Ethernet cables for 1-Gbit transmission. Compared with a switch that has only RJ45 ports, an SFP-port switch supports more types of communication cables and longer link reaches. Here are the differences between SFP and RJ45 ports.

Parameter | SFP Port | RJ45 Port
Connection Cable Types | Multimode fiber cable, single-mode fiber cable, twisted pairs (Cat5, Cat6 or higher) | Twisted pairs (Cat5, Cat6 or higher)
Max. Transmission Distance | MMF: 550 m; SMF: 150 km; Cat5: 100 m | 100 m (330 ft)
Data Rate | 1000 Mbps (1G) | 1000 Mbps (1G)
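As a hypothetical illustration of applying those distance limits, here is a small helper that picks 1G media by link length. The function and thresholds are our own reading of the table, not a vendor tool.

```python
# Pick 1G media from the distance limits in the table above (illustrative only).

def pick_1g_media(distance_m: float) -> str:
    if distance_m <= 100:
        return "RJ45 port with Cat5e/Cat6 twisted pair (or an SFP 1000BASE-T module)"
    if distance_m <= 550:
        return "SFP port with multimode fiber (e.g., 1000BASE-SX)"
    if distance_m <= 150_000:
        return "SFP port with single-mode fiber (e.g., 1000BASE-LX/EX/ZX)"
    return "beyond typical 1G optics; the link needs a different design"

print(pick_1g_media(80))     # copper is fine
print(pick_1g_media(2_000))  # single-mode fiber territory
```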

Since most customer endpoints still use RJ45 ports, some switches include combo SFP/RJ45 ports to retain the convenience of RJ45 while offering the advantages of SFP networking. Users can then choose either the SFP port or the RJ45 port (one at a time) for short-distance connections.

What Is Dual-Rate SFP Port?

Unlike the combo SFP/RJ45 port, which is actually one link shared by two different ports, a dual-rate SFP port is a single SFP slot that can be configured to support two different data rates. Generally, a dual-rate SFP slot can be set to either 1G mode or 10G mode, i.e., an SFP module or an SFP+ module can be installed in it. The actual data rate of the link depends on the configured mode and the transceiver module in use. There are several cases, listed below and encoded in the short sketch that follows the list:

  • The dual-rate SFP slot is 10G activated, and an SFP+ module is installed; the interface is in 10G mode.
  • The port is 10G activated, but an SFP module is installed; the interface is in 1G mode.
  • The port is not 10G activated, but an SFP+ module is installed; the interface will be link down state.
  • The port is not 10G activated, and an SFP module is installed; the interface is in 1G mode.
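Here is a minimal sketch encoding those four cases; the function name and inputs are ours, since real switches expose this behavior through their own configuration interfaces.

```python
# Link state of a dual-rate SFP slot, per the four cases above.

def link_state(port_10g_activated: bool, module: str) -> str:
    if module not in ("SFP", "SFP+"):
        raise ValueError("module must be 'SFP' or 'SFP+'")
    if port_10g_activated:
        return "10G" if module == "SFP+" else "1G"
    return "link down" if module == "SFP+" else "1G"

for activated in (True, False):
    for module in ("SFP+", "SFP"):
        print(f"10G activated={activated}, module={module:4} -> {link_state(activated, module)}")
```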

Summary

The three port types – SFP, RJ45, and GBIC – are used in different networking equipment. The SFP port is now more common than the GBIC port since it provides the same function in a more compact size. Comparing SFP with RJ45, the SFP port supports connectivity through a variety of fiber cables as well as copper twisted pairs, over a wide range of link distances, while the RJ45 port accepts only twisted-pair cables over shorter distances. Since everything has two sides, the SFP slot is not always the best choice. Two articles – GBIC vs SFP: When It's Best to Use GBIC and When to Use SFP, and RJ45 vs SFP: Which Should I Use to Connect Two Switches – discuss when to use which port/module type.


PoE Switch VS. PoE+ Switch: Which Will You Choose?


Many people choose to install PoE (Power over Ethernet) switches for enterprise or home use. PoE technology saves time and money on cable installation by carrying electrical power over network cable. In the market, careful shoppers will find some switches marked as PoE and some as PoE+. What's the difference between these two switches, and which should you use? Find the answers in the following article.

IEEE802.3af (PoE) and IEEE802.3at (PoE+) Standards

To answer this question, we have to look at the IEEE 802.3af (PoE) and IEEE 802.3at (PoE+) standards. The IEEE 802.3af standard was established in 2003 and upgraded to IEEE 802.3at in 2009. IEEE 802.3af supports network devices that require up to 13 W of electrical power, but that can't meet the power needs of many devices. Therefore, the IEEE 802.3at standard was introduced to offer more power for high-power PoE devices, for instance, cameras with high-power IR illuminators, IP telephones, and small network printers. The big difference between IEEE 802.3af and IEEE 802.3at is the maximum amount of power they provide over Cat5 cable: 15.4 W for the 802.3af standard and 30 W for the 802.3at standard. PoE switches and PoE+ switches are designed to these different IEEE standards.

PoE Switch VS. PoE+ Switch: What’s the Difference?

Let's compare two switches: the S1130-8T2F and the S1250-8T2F. The S1130-8T2F is an 8-port PoE switch with 2 SFP ports; the S1250-8T2F is an 8-port Gigabit PoE+ managed switch with 2 SFP ports. At first glance, the two switches look the same, but one is a PoE switch and the other is a PoE+ switch.

PoE switch vs. PoE+ switch

Comparing the two switches, the switching capacity, RJ45 ports, and SFP ports are all the same; only the max. power consumption differs. The max. power consumption of the S1130-8T2F is 130 W, while that of the S1250-8T2F is 250 W. A PoE switch's max. power consumption is smaller than a PoE+ switch's.

PoE Switch VS. PoE+ Switch: Which One Should We Choose?

Many users run into this question: should we choose a PoE switch or a PoE+ switch? A PoE network switch is cheaper, but a PoE+ switch has some advantages over a PoE switch.

  • More electrical power – a PoE+ switch can provide up to 30 W per port, nearly double the power a PoE switch can deliver to powered devices.
  • Smart power budgeting–PoE+ switch includes scope for power sources and powered devices to communicate with each other to negotiate an allowance of electrical power.
  • A PoE+ switch can support a more complete range of network equipment, including IP cameras with heaters/blowers and multichannel wireless access points.

To help you make a better decision, I'll take two 24-port switches as an example: the S1400-24T4F PoE switch and the S1600-24T4F PoE+ switch.

The S1400-24T4F is a 24-port Gigabit PoE managed switch with 4 SFP ports and a 400 W power budget. It is compliant with IEEE 802.3af/at and is a good solution for SMBs or entry-level enterprises that demand industrial, surveillance, IP phone, IP camera, or wireless applications.

The S1600-24T4F is a 24-port Gigabit PoE+ managed switch with 4 SFP ports and a 600 W power budget. It is compliant with IEEE 802.3af/at, supporting connections to VoIP phones, wireless APs, and IP surveillance cameras for intelligent switching and network growth.

Both switches support the 802.3af/at standards: either can power 802.3af devices on every port, but neither budget covers the full 802.3at draw on all 24 ports at once. There aren't too many differences between the two: the S1400-24T4F is cheaper, and the S1600-24T4F provides more electrical power. Whether to choose the S1400-24T4F (PoE switch) or the S1600-24T4F (PoE+ switch) depends on your needs and budget; the PoE switch is still employed by many users.
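As a back-of-the-envelope way to sanity-check a power budget against those standards' per-port maxima, consider the sketch below. The device mixes are hypothetical, and real switches also reserve some power for their own electronics.

```python
# Worst-case PoE budget check using the per-port maxima cited above:
# 15.4 W for IEEE 802.3af, 30 W for IEEE 802.3at. Device mixes are hypothetical.

PER_PORT_W = {"802.3af": 15.4, "802.3at": 30.0}

def budget_ok(budget_w: float, devices: list[str]) -> bool:
    demand = sum(PER_PORT_W[std] for std in devices)
    print(f"worst-case demand {demand:.1f} W vs. budget {budget_w:.0f} W")
    return demand <= budget_w

budget_ok(400, ["802.3af"] * 24)  # 369.6 W: a full load of af devices fits
budget_ok(600, ["802.3at"] * 24)  # 720.0 W: 24 at-devices exceed even 600 W
```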

Can We Connect PoE Access Point with a PoE+ Switch?

PoE+ is backward compatible with PoE. PoE+ switches can recognize 802.3af powered devices and deliver PoE to them as normal. PoE+ powered devices can also be connected to 802.3af PoE switches and are supposed to restrict how much power they draw accordingly. As shown above, the S1400-24T4F PoE switch and the S1600-24T4F PoE+ switch are both compatible with the 802.3af/at standards. So can we connect a PoE access point to a PoE+ switch? The answer is yes. PoE switch vs. PoE+ switch: do you see the difference now? We hope this article has helped.

Related Article: PoE Switch vs. PoE Injector vs. PoE Splitter

One LC Switchable Uniboot Cable Removes All Your Problems


Suppose you need some LC fiber patch cords to build a data center, and you can choose between common LC patch cable and LC switchable uniboot cable. Which will you choose? Considering cost alone, you might select common LC patch cords, but think about it for a moment and you may make a different choice. Let's see why you might need LC switchable uniboot cable.

LC Switchable Uniboot Cable: Switch Polarity Easily and Quickly

Have you ever changed polarity when running fiber patch cords? How long did it take? Did you make any mistakes? Polarity is a complicated business: engineers must be very careful to ensure that the transmit signal (Tx) on one side matches the corresponding receiver (Rx) on the other. If polarities don't match, signal transmission suffers. So how do you convert polarity in an easy, quick way? LC switchable uniboot cable is your option.


LC Switchable Uniboot Cable Helps You Do Easy Polarity Conversion

Polarity conversion on traditional LC cables requires re-terminating the connector, which takes time, invites mistakes, and needs special tools. Unlike common LC patch cords, switchable uniboot cable lets the polarity be switched without connector re-termination. Polarity changes can be made in the field quickly, without tools, to achieve the correct fiber mapping, and with no damage to the fiber.


How to Reverse the Polarity of LC Switchable Uniboot Cable?

To reverse the polarity, you need just three steps. First, open the connector top. Second, switch the polarity as needed. Finally, close the connector top. No tools are needed during the whole process. Isn't the LC switchable uniboot cable a good choice?


LC Switchable Uniboot Cable Is Good for Saving Space and Cooling

Apart from the polarity issue, engineers may run into limited space and airflow when building infrastructure in a high-density data center. How to deal with that? Switchable uniboot cable is again a good choice. The cable is designed as a round, 2 mm jacketed cable rather than a zipcord duplex cable, reducing cable size and increasing density by up to 60%. So switchable uniboot cable helps with both space savings and cooling.


What Should You Notice When Buying LC Switchable Uniboot Cable?

LC switchable uniboot cable simplifies polarity switching and saves space, making it an ideal solution for high-density data centers. In case you need it, here are some suggestions for buying LC switchable uniboot cable.

  • 1. The connector is an essential factor. A poor-quality connector can cause optical loss. Senko connectors, famous for their high quality, are a safe choice.
  • 2. The other important factor for patch cords is the fiber. As we know, bending is sometimes inevitable. Corning fiber is outstanding for its bending insensitivity: for single-mode fiber, the minimum bend radius is 10 mm; for multimode fiber, it is 7.5 mm.
  • 3. Pay attention to insertion loss (IL) and return loss (RL) for single-mode and multimode cable.
  • 4. Check whether the cable meets the relevant standards, for instance, CE, IEC, RoHS, EIA/TIA, and Telcordia GR-326-CORE.
  • 5. LC uniboot patch cable price is of course another point you care about, so compare prices from several uniboot LC cable manufacturers and choose one you can accept.

Related Article: LC Fiber Connector, Adapter and Cable Assemblies

Change Polarity in a Hurry? Try Polarity Switchable LC Cable!