Tag Archives: data center

Differences Between Cloud Computing and Data Center


Many people are confused about what cloud computing and data centers actually are. They often ask questions like, “Is a cloud a data center?”, “Is a data center a cloud?” or “Are cloud and data center two completely different things?” Maybe you know your company needs the cloud and a data center. And you also know your data center needs the cloud and vice versa. But you just don’t know why! Don’t worry. This essay will help you gain a thorough understanding of the two terms and explain how they differ from each other. Let’s begin with their definitions.

What Are Data Centers and Cloud Computing?
Difference between Cloud Computing and Data Center

The term “data center” can be interpreted in a few different ways. First, an organization can run an in-house data center maintained by trained IT employees whose job is to keep the system up and running. Second, it can refer to an offsite storage center that consists of servers and other equipment needed to keep the stored data accessible both virtually and physically.

The term “cloud computing,” by contrast, did not exist before the advent of the Internet. Cloud computing changes the way businesses work. Rather than storing data locally on individual computers or a company’s network, cloud computing entails the delivery of data and shared resources via a secure and centralized remote platform. And rather than using a company’s own servers, it places those resources in the hands of a third-party organization that offers such a service.

Cloud Computing VS. Data Center in Security

Since the cloud is an external form of computing, it may be less secure, or require more work to secure, than a data center. Unlike a data center, where you are responsible for your own security, with the cloud you entrust your data to a third-party provider that may or may not have the most up-to-date security certifications. If your cloud is distributed across several data centers in different locations, each location will also need proper measures to ensure security.

A data center is also physically connected to a local network, which makes it easier to ensure that only those with company-approved credentials and equipment can access stored apps and information. The cloud, however, is accessible by anyone with the proper credentials anywhere there is an Internet connection. This opens a wide array of entry and exit points, all of which need to be protected to make sure that data transmitted to and from these points is secure.

Cloud Computing VS. Data Center in Cost

For most small businesses, cloud computing is a more cost-effective option than a data center. When you choose a data center, you have to build the infrastructure from scratch and are responsible for your own maintenance and administration. Besides, a data center takes much longer to get started and can cost businesses $10 million to $25 million per year to operate and maintain.

Unlike a data center, cloud computing does not require much time or capital to get up and running. Instead, most cloud computing providers offer a range of affordable subscription plans to meet customers’ budgets and scale the service to their actual needs. Data centers take time to build, whereas cloud services are available for use almost immediately after registration.


Going forward, cloud computing services will become increasingly attractive thanks to their low cost and convenience. The cloud creates a new way to facilitate collaboration and information access across great geographic distances while reducing costs. Therefore, compared with the traditional data center, the future of cloud computing looks much brighter.

Three Types MTP Harness Cables Used in Today’s Data Center

As we know, harness cables are generally used to connect high-density switches with LC serial transceivers installed. The transition harness connects to the pre-installed MTP backbone trunk cable and then furcates to LC connectors entering the switch. These MTP-LC harness cables are usually supplied in short lengths because they are normally only used for “in-rack” connections. Transition harnesses are available for Base-8, 12 and 24 backbones, and the LC tails are numbered for clear port identification and traceability.

MTP Harness Cable

Application Scenario

MTP-LC harness cables application

Another harness cable type is the conversion harness cable, which allows users to convert their existing MTP backbone cables to an MTP type that matches their active equipment. Conversion harnesses are a low-loss alternative to conversion modules because they eliminate one mated MTP pair across the link. Many of today’s legacy infrastructures are built using a Base-12 MTP backbone design; however, experience shows that this connector is rarely used on higher data rate switches or servers. Currently, Base-8 is the preferred connector for 40G (SR4) transceivers and Base-24 is the preferred connector for 100G (SR10) transceivers.

MTP Harness Cable

The final type of harness cable is the MTP trunk harness cable. MTP trunk harness cables are high-density multi-stranded cables that form the backbone of the data center. These trunk harness cables are available in fiber counts up to 144 fibers, which reduces installation time by consolidating multiple sub-units into a single cable. This approach significantly reduces the overall diameter of the cable and provides much better space utilization in cable routing channels. Like the two harness cable types mentioned above, MTP trunk harness cables are also available with 8-, 12- and 24-fiber sub-units so that users can deploy Base-8, Base-12 or Base-24 infrastructures to suit their MTP connectivity requirements.

MTP harness cable

Conversion and Trunk harness Cable Application


Upgrade to 40G / 100G Networks with High-Density Fiber Enclosures

The migration to 40G/100G networks will drive a number of changes within the data center and pave the way for the ever-increasing bandwidth needs of cloud computing, web services and virtualized applications. At the top of the list of these changes is the considerable challenge of cabling and cable management in a high-density computing environment. As we know, the cable plant for 40G/100G is different from that of 10G networks, leading to an overabundance of fan-out cables with different legs and connectors to meet the needs of devices of varying speeds. Without some type of convenient patching solution, the result will be a tangle of cables that makes it difficult to install, maintain and upgrade network equipment.

High-density fiber enclosures can connect different generations of equipment such as 10Gb, 40Gb and 100Gb in a simple panel-cassette system. No tools are required to install the cassette in the panel enclosure. Each cassette features factory terminated connectors that reduce the time and labor required of field connector terminations. In a word, high-density fiber enclosures can simplify cabling systems for your data center projects and make your fiber systems easy to order and easy to install.


How Does It Work?
Four MTP HD cassettes are loaded into a rack enclosure, which consolidates all the high-bandwidth connections to a single point. Then, you can simply patch the 40G MTP cables at the back and the standard LC cables to devices at the front of the cassette. Each MTP HD cassette is loaded with 12 LC duplex connectors on the front side and 2 MTP-12 connectors at the rear. The high-density fiber enclosure is therefore loaded with 48 LC duplex connectors (96 fibers) on the front side and 8 MTP-12 connectors at the rear. As a result, you can achieve 320G in a 1RU rack mount, which provides among the highest fiber densities and port counts in the industry, maximizing rack space utilization and minimizing floor space. Without this solution, the IT staff would have to pull a new fan-out cable each time they needed a new connection. Modular cassettes allow you to expand as needed to accommodate the necessary bandwidth and connectors.
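The port and bandwidth arithmetic above can be sketched quickly. This is a minimal illustration using the counts from the paragraph, not a product specification:

```python
# Density math for the 1RU enclosure described above (article's figures).
CASSETTES_PER_ENCLOSURE = 4
LC_DUPLEX_PER_CASSETTE = 12   # front of each cassette: 12 LC duplex ports
MTP12_PER_CASSETTE = 2        # rear of each cassette: 2 MTP-12 connectors
GBPS_PER_MTP = 40             # each rear MTP-12 carries one 40G link

lc_duplex = CASSETTES_PER_ENCLOSURE * LC_DUPLEX_PER_CASSETTE  # 48 ports
fibers = lc_duplex * 2                                        # 96 fibers
mtp_ports = CASSETTES_PER_ENCLOSURE * MTP12_PER_CASSETTE      # 8 ports
total_gbps = mtp_ports * GBPS_PER_MTP                         # 320G total

print(f"Front: {lc_duplex} LC duplex ({fibers} fibers); "
      f"rear: {mtp_ports} MTP-12 -> {total_gbps}G per 1RU")
```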


FS.COM’s High-Density Fiber Enclosure Solution is ideal for consolidating dozens of fiber cable runs into an easy-to-manage, high-density patching system. It’s easy to manage, easy to grow as your needs expand and, most importantly, easy to convert when you add 40/100G switches, giving you the flexibility to adapt as your technology changes.

Data Center Upgrade — Who Should Be Responsible for Buying Transceivers?

There was a time when cable products were specifically associated with hardware OEMs. If a company was buying or using one of these vendors’ products, the matching cables also had to be used. Therefore, whoever was responsible for managing the hardware was also responsible for the cabling used to connect the devices together. Then the structured cabling industry changed this. The cabling infrastructure is now viewed as an independent asset separate from the IT hardware. This has allowed companies to make purchasing decisions for IT and cabling independently of each other. But this can be a problem. To understand why, let’s first look at how a LAN operates.


The OSI Model of LAN Network
As we know, the operation of a local area network (LAN) is defined by the Open Systems Interconnection Reference Model (OSI Model). The OSI Model defines seven layers of operation. By using the model, the industry could develop networking functions in a modular fashion and still ensure interoperability. The bottom of the stack is Layer 1, the Physical Layer. Layer 1 includes the cabling that is used to connect the various pieces of equipment together so that the data can be transported. The next step up the stack is Layer 2, the Data Link Layer. Layer 2 provides addressing and switching, so that the data can be sent to the appropriate destination. Layer 3 is the Network Layer, where data can be routed to another network. Layers 4 through 7 deal with software implementations.
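The seven layers just described can be kept straight with a simple lookup; a minimal sketch using the standard layer names:

```python
# The OSI layers described above, annotated with the roles the article
# assigns them.
OSI_LAYERS = {
    1: "Physical",       # cabling and connectors that transport the data
    2: "Data Link",      # addressing and switching to the right destination
    3: "Network",        # routing data to another network
    4: "Transport",      # layers 4-7 deal with software implementations
    5: "Session",
    6: "Presentation",
    7: "Application",
}

def layer_name(n: int) -> str:
    """Return the OSI layer name for a layer number 1-7."""
    return OSI_LAYERS[n]

print(layer_name(1))  # Physical -- where structured cabling lives
```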

OSI Model
The OSI Model meant that an end-user could purchase software (Layer 7) and expect it to work on multiple vendors’ hardware (Layer 2), and the hardware could be connected using cabling from multiple vendors (Layer 1). Structured cabling now had a home within Layer 1. This model led to a division of responsibility between cabling and network design specifications. The end-user ended up having “cabling people” and “networking people” on their staff. Each group used its own set of vendors and supply chains to specify and source materials, and each only needed a very basic understanding of what the other was doing. This system has worked very well for the enterprise LAN. So what’s the problem?

What Is the Problem?
In the 1990s, copper cable was widely used in data center cabling deployments. As time went on, optical fiber cable was added. In fiber switches, it is common to use pluggable transceivers. This is done for a variety of reasons, but one is cost. Even though a transceiver is plugged into a switch, it is part of the OSI Model’s Layer 1, the Physical Layer. Additionally, most of the transceiver is part of the Physical Media Dependent (PMD) portion of Layer 1, as illustrated here. This means that the transceiver and the cable type must match.

transceiver Physical Media Dependent
However, unlike copper, there was never a fixed standard for the connector type or channel distance. Fiber may have many different standards and connector options. With multiple fiber types, multiple operating wavelengths and multiple connectivity options, the number of solutions seems limitless. Since the transceiver is physically plugged into the switch, it has always been considered the networking group’s responsibility. “Networking people” are responsible for buying transceivers and “cabling people” are responsible for buying cabling products, and this is where the problem arises. Let’s take the following real-life case as an example.

Real-life Case and Solution
Company A has a data center. Marsha is the facilities manager and is responsible for the data cabling. She has designed a cabling plan that has migrated from 1G to 10G. Anticipating the 40G requirements defined by IEEE 802.3ba (40GBASE-SR4), she used a cassette-based platform to allow for the transition from the LC connectivity of 10G to the MPO connectivity of 40G. Greg is the network manager. As the migration to 40G switches was about to commence, his hardware vendor recommended that they change to a new, unique transceiver solution that used LC connectivity. This appeared to be a great idea because it would mean that Marsha would not have to change any of her connectivity. However, he did not consult with Marsha, because the hardware decisions are his to make. When the 40G switches arrived, Marsha was surprised by the connectivity choice because it limited her power budget. This division of responsibility is what caused the problem.

data center transceiver
Greg needs to have a 40G connection from Rack A to Rack B. From a Layer 2/3 perspective, that is all that matters. He still has the responsibility and complete control to define his needs and select equipment vendors for things like switches, routers, servers, etc. Instead of defining the form of the data rate, he simply specifies the speed. By shifting the single component (pluggable transceiver) from Greg to Marsha, the organization can make its decision much more efficiently. Greg does not have to worry about the variety of fiber and transceiver options, nor the impacts that they have on each other. And Marsha can manage the entire optical link, from transceiver to transceiver, which is all within Layer 1. Her experience with fiber and connectivity options puts her in a better position to determine which transceiver options are the most appropriate.

Looking back, the onset of structured cabling separated cabling purchasing from IT hardware purchasing. Looking at the present day and into the future, rapidly increasing data rates, especially in the data center, are requiring another shift in the way we conduct business. By redefining the link to include not only cabling and connectivity but also the transceiver, we put Layer 1 performance in the hands of the people most familiar with it. FS.COM provides a full range of transceivers and matched cabling products at cost-effective prices, aiming to offer high performance-price-ratio solutions for you.

SMF or MMF? Which Is the Right Choice for Data Center Cabling?

Selecting the right cabling plant for data center connectivity is critically important. The wrong decision could leave a data center incapable of supporting future growth, requiring an extremely costly cable plant upgrade to move to higher speeds. In the past, due to the high cost of single-mode fiber (SMF), multimode fiber (MMF) was widely and successfully deployed in data centers for many years. However, as technologies have evolved, the difference in price between SMF and MMF transceivers has been largely negated. With cost no longer the dominant decision criterion, operators can make architectural decisions based on performance. Under these circumstances, should we choose SMF or MMF? This article may give you some advice.

MMF Can’t Reach the High Bandwidth-Distance Needs
MMF datacenter
Based on its fiber construction, MMF has different classification types that determine what optical signal rates are supported over what distances. Many data center operators who deployed OM1/OM2 MMF a few years ago are now realizing that the older MMF does not support higher transmit rates like 40GbE and 100GbE. As a result, some MMF users have been forced to add later-generation OM3 and OM4 fiber to support standards-based 40GbE and 100GbE interfaces. However, MMF’s physical limitations mean that as data traffic grows and interconnectivity speeds increase, the distance between connections must decrease. The only alternative in an MMF world is to deploy more fibers in parallel to support more traffic. Therefore, while MMF cabling has been widely and successfully deployed for generations, its limitations are now becoming more serious. Operators must weigh unexpected cabling costs against a network incapable of supporting new services.

SMF May Be a Viable Alternative
Previously, organizations were reluctant to implement SMF inside the data center due to the cost of the pluggable optics required, especially compared to MMF. However, newer silicon technologies and manufacturing innovations are driving down the cost of SMF pluggable optics. Transceivers with Fabry-Perot edge-emitting lasers (single-mode) are now comparable in price and power dissipation to VCSEL (multimode) transceivers. Besides, where MMF cable plants introduce a capacity-reach tradeoff, SMF eliminates network bandwidth constraints. This allows operators to take advantage of higher-bit-rate interfaces and wavelength division multiplexing (WDM) technology to increase by three orders of magnitude the amount of traffic that the fiber plant can support over longer distances. All these factors make SMF a more viable option for high-speed deployments in data centers.

SMF datacenter

Comparison Between SMF and MMF
10GbE has become the predominant interconnectivity interface in large data centers, with 40GbE and 100GbE playing roles in some high-bandwidth applications. Put simply, the need for fiber cabling that supports higher bit rates over extended distances is here today. With that in mind, the most significant difference between SMF and MMF is that SMF provides a higher spectral efficiency than MMF, meaning it supports more traffic over a single fiber using more channels at higher speeds. This is in stark contrast to MMF, where cabling support for higher bit rates is limited by its large core size, which effectively limits the distance higher-speed signals can travel over MMF. In fact, in most cases, currently deployed MMF cabling is unable to support higher speeds over the same distance as lower-speed signals.

                                    FP (SMF)     VCSEL (MMF)
Link budget (dB)                    4 to 6       2
Reach in meters (higher is better)
10GbE                               1300         300
40GbE                               1300         150
100GbE                              1300         <100
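As a quick way to read the table, here is a small sketch that picks which fiber type covers a given run length, using the article’s reach figures (illustrative values, not standards limits):

```python
# Quoted reach in meters from the table above (article's figures).
REACH_M = {
    "10GbE":  {"SMF": 1300, "MMF": 300},
    "40GbE":  {"SMF": 1300, "MMF": 150},
    "100GbE": {"SMF": 1300, "MMF": 100},  # table lists "<100" for MMF
}

def fiber_options(rate: str, distance_m: int) -> list:
    """Return the fiber types whose quoted reach covers the run length."""
    return [f for f, reach in REACH_M[rate].items() if distance_m <= reach]

print(fiber_options("10GbE", 250))  # both types cover 250 m at 10GbE
print(fiber_options("40GbE", 200))  # only SMF covers 200 m at 40GbE
```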

As operators consider their cabling options, the tradeoff between capacity and reach is important. Network operators must assess the extent to which they believe their data centers are going to grow. For environments where users, applications and corresponding workloads are all increasing, single-mode fiber offers the best future-proofing for performance and scalability. And because of fundamental changes in how transceivers are manufactured, those benefits can be attained at prices comparable to SMF’s lower-performing alternative.


WBMMF – Next Generation Duplex Multimode Fiber in the Data Center

Enterprise data center and cloud operators use multimode fiber for most of their deployments because it offers the lowest-cost means of transporting high data rates over distances aligned with the needs of these environments. The connections typically run at 10G over a duplex multimode fiber pair—one transmit (Tx) fiber and one receive (Rx) fiber. Upgrading to 40G and 100G using MMF has traditionally required the use of parallel ribbons of fiber. While parallel transmission is simple and effective, continuation of this trend drives higher cost into the cabling system. However, a new generation of multimode fiber called WBMMF (wideband multimode fiber) is on the way, which can enable transmission of 40G or 100G over a single pair of fibers rather than the four or ten pairs used today. Now, let’s take a closer look at WBMMF.

What Is Wideband Multimode Fiber?
WBMMF is a new multimode fiber type under development that will extend the ability of conventional OM4 multimode fiber to support multiple wavelengths. Unlike traditional multimode fiber, which supports transmission at the single wavelength of 850 nm, WBMMF will support traffic over a range of wavelengths from 850 to 950 nm. This capability will enable multiple lanes of traffic over the same strand of fiber to transmit 40G and 100G over a single pair of fibers and to drastically increase the capacity of parallel-fiber infrastructure, opening the door to 4-pair 400GE and terabit applications. Multimode fiber continues to provide the most cost-effective platform for high bandwidth connectivity in the data center, and with the launch of the WBMMF solution, that platform has been extended to support higher speeds with fewer fibers and at greater distances.

Wideband Multimode Fiber

What Is the Technology Behind WBMMF?
WBMMF uses shortwave wavelength division multiplexing (SWDM) to increase its transmission capacity by a factor of four. WDM technology is well known for its use in single-mode transmission, but has only recently been adapted for use with vertical cavity surface-emitting lasers (VCSELs), which have been proven in high-speed optical communications and are widely deployed in 10G interconnection applications. SWDM multiplexes different wavelengths onto duplex MMF using WDM VCSEL technology. By simultaneously transmitting from four VCSELs, each operating at a slightly different wavelength, a single WBMMF pair can reliably carry 40G (4x10G) or 100G (4x25G). The use of SWDM thus enables WBMMF to maintain the cost advantage of multimode fiber systems over single-mode fiber in short links while greatly increasing the total capacity of a multimode fiber link.


Why Does WBMMF Make Sense?
In order to increase transmission speeds up to 10G or 25G, transceiver vendors simply increased the speed of their devices. When the 40G and 100G standards were developed, transmission schemes that used parallel fibers were introduced. This increase in fiber count provided a simple solution to the limitations of the technology available at the time. It was accepted in the industry and allowed multimode links to maintain a low-cost advantage. However, the fiber count increase was not without issues. At some point, simply increasing the number of fibers for each new speed became unreasonable, in part because the cable management of parallel fiber solutions, combined with the increasing number of links in a data center, becomes very challenging. Please see the picture below. Usually, 40G is implemented using eight of the twelve fibers in an MPO connector. Four of these eight fibers are used to transmit while the other four are used to receive, each Tx/Rx pair operating at 10G. But if we use WBMMF, two fibers are enough: a single Tx/Rx pair can carry 40G by simultaneously transmitting four different wavelengths. This enables at least a four-fold reduction in the number of fibers for a given data rate, which provides a cost-effective cabling solution for the data center.
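The fiber-count reduction described above can be put in numbers; a small sketch using the article’s lane rates (the helper names are illustrative):

```python
# Parallel MMF: one Tx fiber and one Rx fiber per 10G lane
# (40GBASE-SR4 style, eight of the twelve MPO fibers in use).
def parallel_fiber_count(total_gbps: int, lane_gbps: int = 10) -> int:
    lanes = total_gbps // lane_gbps
    return lanes * 2          # each lane needs a Tx fiber and an Rx fiber

# WBMMF with SWDM: four wavelengths share one duplex Tx/Rx pair.
def wbmmf_fiber_count() -> int:
    return 2

rate = 40
print(parallel_fiber_count(rate), "fibers in parallel vs",
      wbmmf_fiber_count(), "fibers over WBMMF")  # 8 vs 2: a 4x reduction
```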

Parallel fibers vs WBMMF

WBMMF arrives at the right moment to meet the challenges associated with escalating data rates and the ongoing need to build cost-effective infrastructure. Besides, WBMMF will support existing OM4 applications to the same link distances. Optimized to support wavelengths in the 850 nm to 950 nm range to take advantage of SWDM, WBMMF ensures not only efficient support for future applications at useful distances, but also complete compatibility with legacy applications, making it an ideal universal medium for the applications of the present as well as those of the future.

Original article source: http://www.fs.com/blog/wbmmf-next-generation-duplex-multimode-fiber-in-the-data-center.html

The Era of Fusion Splicing Is Coming

Fusion splicing
As fiber deployment has become mainstream, splicing has naturally crossed from the outside plant (OSP) world into the enterprise and even the data center environment. Fusion splicing involves the use of localized heat to melt together, or fuse, the ends of two optical fibers. The preparation process involves removing the protective coating from each fiber, precise cleaving and inspection of the fiber end-faces. Fusion splicing has been around for several decades and is a trusted method for permanently fusing together the ends of two optical fibers to achieve a specific length or to repair a broken fiber link. However, due to the high cost of fusion splicers, it has not been widely used. In recent years, improvements in optical technology have been changing this, and the continued demand for increased bandwidth has also spread the application of fusion splicing.

Falling Prices of Fusion Splicers
Fusion splicer costs have been one of the biggest obstacles to broad adoption of fusion splicing. In recent years, significant decreases in splicer prices have accelerated the popularity of fusion splicing. Today’s fusion splicers range in cost from $7,000 to $40,000. The highest-priced units are designed for specialty optical fibers, such as polarization-maintaining fibers used in the production of high-end non-electrical sensors. The lower-end fusion splicers, in the $7,000 to $10,000 range, are primarily single-fiber fixed V-groove type devices. The popular core alignment splicers range between $17,000 and $19,000, well below the $30,000 price of 20 years ago. Prices have dropped dramatically due to more efficient manufacturing, and volume is up because fiber is no longer a voodoo science and more people are working in that arena. Recently, more and more fiber is being deployed closer to the customer premises with higher splice-loss budgets, which has resulted in greater participation from customers purchasing lower-end splicers to accomplish their jobs.

More Cost-effective Cable Solutions
The first and primary use of splicing in the telecommunications industry is to link fibers together in underground or aerial outside-plant fiber installations. It used to be very common to fusion splice at the building entrance to transition from outdoor-rated to indoor-rated cable, because the NEC (National Electrical Code) specifies that outdoor-rated cable can only come 50 feet into a building due to its flame rating. The advent of plenum-rated indoor/outdoor cable has reduced that transition splicing to a minimum. But that’s not to say that fusion splicing in the premises isn’t going on.

Longer distances in the outside plant can mean that sticking with standard outdoor-rated cable and fusion splicing at the building entrance is the more economical choice. If it’s a short run between buildings A and B, it makes sense to use newer indoor/outdoor cable and come right into the cross-connect. However, because indoor/outdoor cables are generally more expensive, if it’s a longer run with lower fiber counts between buildings, it can ultimately be cheaper to buy outdoor-rated cable and fusion splice to transition to indoor-rated cable, even with the additional cost of splice materials and housing.
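The break-even reasoning above can be sketched with placeholder prices. All dollar figures here are invented assumptions purely to illustrate the comparison, not real cable pricing:

```python
# Option 1: cheaper outdoor-rated cable plus a one-time splice transition
# at the building entrance. Prices are made-up placeholders.
def outdoor_plus_splice_cost(run_m, outdoor_per_m=0.50, splice_fixed=300.0):
    return run_m * outdoor_per_m + splice_fixed

# Option 2: pricier indoor/outdoor cable end to end, no transition splice.
def indoor_outdoor_cost(run_m, io_per_m=0.80):
    return run_m * io_per_m

for run in (500, 1500, 3000):
    splice = outdoor_plus_splice_cost(run)
    direct = indoor_outdoor_cost(run)
    winner = "splice transition" if splice < direct else "indoor/outdoor"
    print(f"{run} m run: {winner} is cheaper")
```

With these assumed prices, the short run favors indoor/outdoor cable while the longer runs favor the splice transition, matching the rule of thumb in the paragraph.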

As fiber to the home (FTTH) applications continue to grow around the globe, they present another situation that may call for fusion splicing. If you want to achieve longer distances in an FTTH application, you have to either fusion splice or use an interconnect. However, an interconnect can introduce 0.75 dB of loss, while a fusion splice is typically less than 0.02 dB. Therefore, the easiest way to minimize loss on an FTTH circuit is to bring the individual fibers from each workstation back to the closet and then splice to a higher-fiber-count cable. This approach also enables centralizing electronics for more efficient port utilization. In FTTH applications, fusion splicing is now also being used to install connectors for customer drop cables using new splice-on connector technology and drop cable fusion splicers.
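Those loss figures make the comparison easy to quantify; a minimal sketch using the per-joint values cited above (typical figures from the article, not guaranteed specs):

```python
INTERCONNECT_LOSS_DB = 0.75   # per mated interconnect (article's figure)
FUSION_SPLICE_LOSS_DB = 0.02  # per fusion splice, typical (article's figure)

def circuit_joint_loss_db(n_joints: int, method: str) -> float:
    """Total joint loss for a circuit with n_joints of the given type."""
    per_joint = {"interconnect": INTERCONNECT_LOSS_DB,
                 "fusion": FUSION_SPLICE_LOSS_DB}[method]
    return n_joints * per_joint

# A hypothetical FTTH circuit with three joints along the way:
print(round(circuit_joint_loss_db(3, "interconnect"), 2), "dB with interconnects")
print(round(circuit_joint_loss_db(3, "fusion"), 2), "dB with fusion splices")
```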

FTTH drop cable fusion splicer

A Popular Option for Data Centers
A significant increase in the number of applications supported by data centers has resulted in more cables and connections than ever, making available space a foremost concern. As a result, higher-density solutions like MTP/MPO connectors and multi-fiber cables, which take up less pathway space than individual duplex cables, are becoming more popular.

Since few manufacturers offer field-installable MTP/MPO connectors, many data center managers are selecting either multi-fiber trunk cables with MTP/MPOs factory-terminated on each end, or fusion splicing to pre-terminated MTP/MPO or multi-fiber LC pigtails. When selecting trunk cables with connectors on each end, data center managers often specify slightly longer lengths because they can’t always predict exact distances between equipment and they don’t want to come up short. However, they then have to deal with the excess slack. When there are thousands of connections, that slack can create a lot of congestion and limit proper airflow and cooling. One alternative is to purchase a multi-fiber pigtail and then splice it to a multi-fiber cable.

Inside the data center and in the enterprise LAN, 12-fiber MPO connectors provide a convenient method to support higher 40G and 100G bandwidth. Instead of fusing one fiber at a time, another type of fusion splicing, called ribbon/mass fusion splicing, is used. Ribbon/mass fusion splicing can fuse up to all 12 fibers in a ribbon at once, which offers the opportunity to reduce termination labor by up to 75% with only a modest increase in tooling cost. Many of today’s high-fiber-count cables involve sub-units of 12 fibers each that can be quickly ribbonized. Splicing those fibers individually is very time-consuming; ribbon/mass fusion splicers, however, splice entire ribbons simultaneously. Ribbon/mass fusion splicer technology has been around for decades and is now available in handheld models.
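The 75% labor-saving claim follows directly from fusing twelve fibers per operation instead of one. Here is a rough sketch in which the per-splice times are invented assumptions chosen only to show the arithmetic:

```python
import math

FIBERS_PER_RIBBON = 12

def one_at_a_time_minutes(fibers: int, mins_per_splice: float = 4.0) -> float:
    """Assumed time to splice fibers individually."""
    return fibers * mins_per_splice

def ribbon_minutes(fibers: int, mins_per_ribbon: float = 12.0) -> float:
    """Assumed time to splice the same fibers ribbon by ribbon."""
    ribbons = math.ceil(fibers / FIBERS_PER_RIBBON)
    return ribbons * mins_per_ribbon

fibers = 144
saving = 1 - ribbon_minutes(fibers) / one_at_a_time_minutes(fibers)
print(f"{saving:.0%} less splicing time for {fibers} fibers")
```

Under these assumed timings the saving works out to 75%, in line with the figure quoted above; real savings depend on the crew and equipment.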

Ribbon/Mass Fusion Splicer

Fusion splicing provides permanent low-loss connections that can be performed quickly and easily, which are definite advantages over competing technologies. In addition, current fusion splicers are designed to provide enhanced features and high-quality performance while remaining very affordable. Fiberstore provides various types of fusion splicers for different uses with high quality at low prices. For more information, please feel free to contact us at sales@fs.com.

Original article source: http://www.fs.com/blog/the-era-of-fusion-splicing-is-coming.html

Three Ways Fiber Optic Transceivers Promote the Data Center

The data center is one of the most critical and dynamic operations in any business. As companies produce, collect, analyze and store more data, IT infrastructures need to grow as well to keep up with the demand. With all that data processing and transmission, it is critical that every design aspect and component of your data center is properly optimized, including its fiber optic transceiver technology. The transceiver and other optical components need to meet the bandwidth requirements for storage, switch and server applications. Now let’s see how optical transceivers will promote data centers in the future.

Fiber Optic Transceivers

Small Package Makes Sense
Optical transceivers are becoming smaller, but more powerful, which makes them an important piece in server technology. In fact, even though a transceiver is physically small, it can handle a network expansion or an entire install. This shrinking of transceivers allows for the improvement of servers. This reduces the overall footprint of servers and networks, which makes data centers smaller and streamlined. Optical transceivers also require lower power consumption, which means you get lower costs both in terms of design and electricity expenses.

Data Center Makes up Big Transceiver Market
Optical components are always being improved, which can only mean good things for data center managers. According to recent numbers, 2016 and beyond will be huge for the data center market and optical components as more companies require efficiency in their networks. Data centers make up 65% of the overall 10G/40G/100G optical transceiver market. Shipments of 10G transceivers continue to grow, with plenty of room left, especially as industry experts expect the Datacom optical transceiver market to reach $2.1bn by 2019.

40G and 100G Transceivers Pave the Way
Consumers and technology experts can expect optical transceivers to improve as data centers grow and the cloud industry expands. Manufacturers have introduced transceivers that can transmit data at 40Gbps and 100Gbps, while some startups are investing millions in developing technology that can achieve higher speeds. These and other improvements can only mean good things for businesses and consumers.

Significantly improving your company's IT infrastructure is becoming an essential task, especially in this data-driven world. Optical transceivers and components are some of the little things that can definitely make a big difference in this effort. Fiberstore provides a variety of fiber optic transceivers with high quality at low prices, from 1000BASE SFP to 10G SFP+, 40G QSFP+ and 100G CFP. For more information, please visit www.fs.com.

Unique Advantages of 10GBASE-T in Migrating Data Center to 10GbE

Over the last decade, large enterprises have been migrating data center infrastructures from 100Mbps Ethernet to 1/10 Gigabit Ethernet (GbE) to support high-bandwidth, mission-critical applications. However, many mid-market companies have found themselves restricted from migrating to 10GbE technology by cost, low port density and high power consumption. For many of these companies, the explosive growth of technologies, data and applications is severely taxing existing 1GbE infrastructures and affecting performance. So it's high time for them to upgrade their data centers to 10GbE. With many 10GbE interface options on offer, such as CX4, SFP+ fiber, SFP+ Direct Attach Copper (DAC), and 10GBASE-T, which one is the best? In fact, the answer is 10GBASE-T.

SFP+, SFP+ Direct Attach Copper (DAC), and 10GBASE-T

Shortcomings of SFP+ in 10GbE Data Center Cabling
SFP+ has been adopted on Ethernet adapters and switches, and its support for both copper and fiber optic cables makes it a better solution than CX4 and the mainstream 10GbE option today. However, SFP+ is not backward-compatible with the twisted-pair 1GbE infrastructure broadly deployed throughout the data center: SFP+ connectors and their cabling are not compatible with the RJ-45 connectors used on 1GbE networks. Enterprise customers cannot simply add SFP+ 10GbE to an existing RJ-45 1GbE infrastructure. New switches and new cables are required, which is a big chunk of change.


Advantages of 10GBASE-T in 10GbE Data Center Cabling
Because 10GBASE-T is backward-compatible with 1000BASE-T, it can be deployed in existing 1GbE switch infrastructures in data centers cabled with CAT6, CAT6A or above. Since 1GbE is still widely used in data centers, this backward compatibility makes 10GBASE-T the perfect choice for a gradual transition from 1GbE to 10GbE. Additional advantages include:

  • Reach
    Like all BASE-T implementations, 10GBASE-T works for lengths up to 100 meters, giving IT managers a far greater level of flexibility in connecting devices in the data center. With this flexibility in reach, 10GBASE-T can accommodate top-of-rack, middle-of-row, or end-of-row network topologies. It gives IT managers the most flexibility in server placement, since it works with existing structured cabling systems.
  • Power
    The challenge with 10GBASE-T is that even single-chip 10GBASE-T adapters consume a watt or two more than the SFP+ alternatives, and more power consumption is not a good thing in the data center. However, the expected incremental power cost over the life of a typical data center is far less than the money saved from reduced cabling costs. Besides, chips improve from one generation to the next with process improvements, so the power and cost of the latest 10GBASE-T PHYs are greatly reduced compared with earlier generations.
  • Reliability
    Another challenge with 10GBASE-T is whether it can deliver the reliability and low bit-error rate of SFP+. This skepticism can also be expressed as whether the high demands of FCoE can be met with 10GBASE-T. In fact, Cisco announced in 2013 that it had successfully qualified FCoE over 10GBASE-T and supports it on its newer switches with 10GBASE-T ports.
  • Latency
    Depending on packet size, latency for 1000BASE-T ranges from sub-microsecond to over 12 microseconds. 10GBASE-T ranges from just over 2 microseconds to less than 4 microseconds, a much narrower latency range. For Ethernet packet sizes of 512B or larger, 10GBASE-T's overall throughput offers an advantage over 1000BASE-T, and its latency is more than 3 times lower at larger packet sizes. Only the most latency-sensitive applications, such as HPC or high-frequency trading systems, would notice any difference.
  • Cost
    When it comes to capital costs, copper cables offer great savings. Typically, passive copper cables are two to five times less expensive than comparable lengths of fiber. In a 1,000-node cluster with hundreds of required cables, that can translate into hundreds of thousands of dollars; extended to even larger data centers, the savings can reach into the millions. Besides, passive copper cables do not consume power, and because their thermal design requires less cooling, there are extensive savings on operating expenditures within the data center. Hundreds of kilowatts can be saved by using copper cables versus fiber.
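As a rough illustration of the capital-cost arithmetic above, here is a short Python sketch. The per-cable prices and the cable count are assumptions chosen only to show the scale of the savings, not quoted figures:

```python
# Hypothetical cabling-cost comparison. Prices are illustrative assumptions;
# the source only states that passive copper is 2-5x cheaper than fiber.
def cabling_cost(num_cables, price_per_cable):
    """Total capital cost for a given cable count."""
    return num_cables * price_per_cable

copper_price = 30.0              # assumed price per passive copper cable (USD)
fiber_price = copper_price * 3   # fiber assumed ~3x copper (within the 2-5x range)

num_cables = 3000                # e.g. a 1,000-node cluster with ~3 links per node

savings = cabling_cost(num_cables, fiber_price) - cabling_cost(num_cables, copper_price)
print(f"Estimated capital savings with copper: ${savings:,.0f}")  # -> $180,000
```

Even with conservative assumptions, the difference lands in the "hundreds of thousands of dollars" range the text describes.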

The 10GbE standards are mature, reliable and well understood. 10GBASE-T breaks through important cost and cable installation barriers in 10GbE deployment as well as offering investment protection via backwards compatibility with 1GbE networks. Deployment of 10GBASE-T will simplify the networking transition by providing an easier path to migrate to 10GbE infrastructure in support of higher bandwidth needed for virtualized servers. In the future, 10GBASE-T will be the best option for 10GbE data center cabling!

Optical Fiber Benefits the Green Data Center Building

With the amount of energy now required to power the world's data centers, one of the greatest challenges in today's data centers is minimizing the costs associated with power consumption and cooling, which is also a requirement for building a green data center. Higher power consumption means increased energy costs and a greater need for heat dissipation. This requires more cooling, which adds even more cost. Under these circumstances, high-speed optical fiber offers a big advantage over copper in reducing network operational and cooling energy.

What Is a Green Data Center?
The word "green" invokes natural images of deep forests and sprawling oak trees, as well as financial images of dollar bills. The topic of green has been gaining momentum across international, commercial and industrial segments as global warming and greenhouse gas effects hit headlines. In different fields, the word "green" has different definitions. Specific to the data center segment of the telecommunications industry, a green data center is a repository for the storage, management, and dissemination of data in which the mechanical, lighting, electrical and computer systems are designed for maximum energy efficiency and minimum environmental impact.

green data center

How to Build a Green Data Center?
Green data centers address two issues that plague the average data center: the power required to run the actual equipment, and the power required to cool it. Reducing the required power effectively lessens not only energy consumption but also environmental impact. Green solutions include:

  • More efficient hardware components and software systems
  • Innovative cooling systems
  • Using natural ways to cool equipment
  • Building near advantageous natural resources or environments
  • Effective server and rack management for better air-flow

How Does Optical Fiber Benefit the Green Data Center Building?
Compared to copper cable, optical fiber offers many advantages for building a green data center. Optical fiber connectivity can enhance green data center installations by utilizing high-port-density electronics with very low power and cooling requirements. Additionally, an optical network provides premier pathway and space performance in racks, cabinets and trays to support high cooling efficiency compared with copper connectivity. These advantages can be summarized in the following three points.

Lower Operational Power Consumption
An optical transceiver requires less power to operate than a copper transceiver. Copper requires significant analog and digital signal processing for transmission, which consumes significantly more energy than optical media. A 10GBASE-T transceiver in a copper system uses about 6 watts of power, while a comparable 10GBASE-SR optical transceiver uses less than 1 watt to transmit the same signal. The result is that each optical connection saves about 5 watts of power. Data centers vary in size, but if we assume 10,000 connections at 5 watts each, that's 50 kW less power, a significant savings opportunity thanks to less power-hungry optical technology.
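The power-savings arithmetic above can be reproduced in a few lines of Python:

```python
# Per-transceiver power figures from the text: ~6 W for 10GBASE-T copper,
# about 1 W for a 10GBASE-SR optical transceiver.
copper_watts = 6.0
optical_watts = 1.0
connections = 10_000   # example data center size used in the text

saved_watts = connections * (copper_watts - optical_watts)
print(f"Power saved: {saved_watts / 1000:.0f} kW")  # -> Power saved: 50 kW
```

Scaling the per-connection saving linearly with the connection count makes it easy to re-run the estimate for your own facility.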

Less Cooling Power Consumption
An optical system requires far fewer switches and line cards for equivalent bandwidth than a copper system. Fewer switches and line cards translate into less energy consumption for electronics and cooling. One optical 48-port line card equals three copper 16-port line cards (as shown in the following picture). A typical eight-line-card chassis switch would have 384 optical ports compared to 128 copper ports, a 3:1 port advantage for optical. It would take three copper chassis switches to match the bandwidth of one optical chassis switch, and more copper chassis switches result in more network and cooling power consumption.
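A quick Python sketch of the port-count comparison above:

```python
# Port-density arithmetic from the text: an eight-slot chassis filled with
# 48-port optical line cards vs. the same chassis with 16-port copper cards.
slots = 8
optical_ports_per_card = 48
copper_ports_per_card = 16

optical_ports = slots * optical_ports_per_card   # 384 ports
copper_ports = slots * copper_ports_per_card     # 128 ports
advantage = optical_ports // copper_ports        # 3:1 in favor of optical
print(optical_ports, copper_ports, f"{advantage}:1")  # -> 384 128 3:1
```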

Line card port density in a 10G optical system vs. copper system

More Effective Management for Better Air-flow
Usually, a 0.7-inch diameter optical cable contains 216 fibers to support 108 10G optical circuits, while 108 copper cables form a bundle 5.0 inches in diameter. The larger CAT6A outer diameter impacts conduit size and fill ratio, as well as cable management due to the increased bend radius. Copper cable congestion in pathways increases the potential for damage to electronics due to air-cooling damming effects and interferes with the ability of ventilation systems to remove dust and dirt. Optical cable offers better system density and cable management and minimizes airflow obstructions in the rack and cabinet for better cooling efficiency. See the picture below: the left is a copper cabling system and the right is an optical cabling system.
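The pathway-space difference implied by those two diameters can be quantified with a short Python sketch (treating each bundle as a simple circle, which is of course an approximation):

```python
import math

# Cross-sectional footprint comparison from the text: a 0.7-inch optical
# cable (216 fibers) vs. a 5.0-inch bundle of 108 CAT6A copper cables.
def circle_area(diameter):
    """Area of a circle from its diameter."""
    return math.pi * (diameter / 2) ** 2

optical_area = circle_area(0.7)  # square inches
copper_area = circle_area(5.0)   # square inches

ratio = copper_area / optical_area
print(f"Copper bundle occupies ~{ratio:.0f}x the pathway area")  # -> ~51x
```

Roughly a 50-fold difference in occupied pathway area is what drives the conduit-fill, bend-radius and airflow problems described above.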

copper cabling system vs optical cabling system

Data center electrical energy consumption is projected to increase significantly in the next five years. Solutions that mitigate energy requirements, reduce power consumption and support environmental initiatives are being widely adopted. Optical connectivity supports the growing focus on a green data center philosophy: optical fibers provide bandwidth capabilities that support legacy and future data-rate applications, and optical fiber connectivity provides the reduction in power consumption (electronics and cooling) and the optimized pathway space utilization necessary to support the movement to greener data centers.

For more information about fiber optics and data center, please visit our twitter page: https://twitter.com/Fiberstore