Tuesday, April 19, 2016

Business continuity and Disaster Recovery Planning

Business continuity and disaster recovery are among the most unpleasant tasks in business planning, yet they offer some of the highest paybacks. Disaster planning is often neglected by companies, even though it provides enormous business value and protects critical assets. Too often, when asked how they prepare for an emergency, companies say that they back up their data every day. But this is not enough. For example, it is not uncommon for companies to back up their systems and lock the backup tapes inside a fire-retardant safe. Fires are among the most common catastrophic disasters to affect businesses, and although these safes are fireproof, they will not stop the backup tapes from melting. To be effective, backups must be taken to an offsite location. Some statistics suggest that 80% of companies that suffer critical data loss close their doors within two years.

In addition to data protection, there are a number of legal, public-relations, organizational, and safety considerations that must be taken into account. In the chaos of an emergency, the broken-window effect can come into play: normally ethical employees might see an absence of authority and leadership as an opportunity for fraud or theft. Every department in the company must contribute to the disaster plan. Disaster planning is a business issue, not just an IT issue. Because the IT department often has the greatest insight into company-wide business processes, IT is typically tasked with disaster planning, but the plan must be developed and implemented with top-down support across all organizational departments. Without this insight and cross-departmental participation, it is impossible to put together a proper plan.

The disaster recovery plan stipulates how a company will prepare for a disaster, what its response will be in the event of a disaster, and what steps it will take to ensure that operations are restored. The plan must cover many possible scenarios, since the causes of disaster vary greatly: deliberate criminal activity, natural disasters such as fires, a stolen laptop, power outages, a terrorist attack, and so on. There are hundreds of possible disaster scenarios, and they vary by culture, geography, and industry. It is also important that the disaster recovery plan be distributed across the organization so that everyone knows their role within the plan.

The business continuity plan is a fairly new methodology that stipulates what steps a company must take to minimize the effects of service interruption. Back when companies were primarily paper-driven and information processing was done using batch processing, companies could tolerate a few days of downtime. But as technology became faster and cheaper, companies began computerizing more of their critical business activities. Companies needed to have systems in place that would minimize the impact of unplanned downtime. The first major event to demonstrate the importance of business continuity planning was the Y2K crisis. Since then, it’s been a standard function of corporate IT planning. One typical example of business continuity would be the electric generators used by hospitals to ensure that patients can still be cared for in the event of a power outage.

Reference: https://www.youtube.com/watch?v=qfjWhAmWYL8

Saturday, April 16, 2016

ISO 27001

ISO 27001, published by the International Organization for Standardization (ISO), is a management framework for the protection of business-critical information. According to ISO 27001, information security is the preservation of confidentiality, integrity, and availability of information; other properties, such as authenticity, accountability, non-repudiation, and reliability, can also be involved. Confidentiality means that only authorized persons can access certain information. Integrity means that only authorized persons can change or add information, and only in specified ways. Availability means that information must be accessible to the people who need it at the time they need it.

An Information Security Management System (ISMS) is a systematic approach to managing confidential or sensitive corporate information so that it remains secure. ISO 27001 is an ISMS standard that replaced BS 7799-2:2002 in late 2005. It formally specifies an ISMS intended to bring information security under explicit management control, and it is a best-practice specification that helps businesses and organizations throughout the world. It adopts the Plan-Do-Check-Act (PDCA) model.

Why should organizations care about ISO 27001?

Reason 1: Compliance. ISO 27001 brings in a methodology that enables organizations to comply in the most efficient way. Certification is often the quickest return on investment if an organization must comply with various regulations regarding data protection, privacy, and IT governance (particularly in financial, health, or government organizations).

Reason 2: Marketing Edge. In a market that is more and more competitive, it is sometimes very difficult to find something that differentiates you in the eyes of your customers. ISO 27001 can be a unique selling point, especially if you handle clients' sensitive information.

Reason 3: Lowering expenses. Information security is usually considered a cost with no obvious financial gain. However, there is a financial gain if you lower the expenses caused by incidents: interruptions in service, occasional data leakage, or disgruntled current and former employees.

Reason 4: Putting your business in order. ISO 27001 is particularly good at sorting out thorny management issues: it forces you to define both responsibilities and duties very precisely, and therefore strengthens your internal organization.

Reference: https://www.youtube.com/watch?v=eN5MtSq89Hs

Tuesday, April 12, 2016

Data Center Construction Costs

When considering the expenses of building and the ongoing management of a data center, you can usually bank on about sixty to eighty percent of your investment going to:
  • Telecommunications cabling and systems
  • Ventilation and cooling systems
  • Electrical cabling and related equipment
  • Electronic security systems
There are, however, some critical data center construction costs that take a back seat in the minds of many data center executives during the design and planning phases leading up to breaking ground for construction.

Here are six significant capital costs you should consider when preparing your data center business plan.

  1. Structural Elements: Just as the human body needs air, blood circulation, and a nervous system, it would not function without a skeletal structure. Likewise, the combined weight of servers, racks, cooling ducts, and cabling in a data center needs strong “bones” to support the load with minimal impact on available space. The raised flooring, walls, and high ceilings of well-designed facilities need to be built to withstand earthquakes and extreme weather such as hurricanes or tornadoes. Using columns, beams, and other framing materials that don’t just meet but exceed standards will protect your investment, and may reduce insurance costs or provide opportunities to win the trust of prospective clients.
  2. Office Space for Clients Working Onsite: Your clients will often need to set up a temporary work space while getting their gear installed and tested. Providing conference rooms, desk space for development and testing, and other amenities for clients when they come onsite is often forgotten but is an important value-add. These facilities can also serve your own needs when hosting data center tours, interviewing personnel, and holding planning meetings for onboarding new customers.
  3. Modular, Adaptable Racking: Server hardware refreshes, upgrades, and expansions can occur frequently in a successful data center. Installing server racks and surrounding walls that can adapt to changing client needs can be another value-added service that differentiates your data center from the competition. Scalability to provide higher tiers of service, ranging from colocation to managed and fully managed Network Operations Center (NOC) monitored services, requires a facility that can be configured in multiple ways. Racks that can be expanded and clustered to adapt to changing capacity requirements are important.
  4. A Strong Foundation: Just like the structural elements, the concrete foundation that supports a data center is vital. Load bearing, lessening the impact of earthquakes, and providing opportunities for raised floors for cabling are all elements of construction that should be considered early in the design process.
  5. Fire Detection and Suppression Equipment: With all the electrical systems that power a data center, and the backup systems that take over should the primary systems fail, wet and dry fire suppression equipment needs to be widely available. Smoke and fire detection systems need to alert both onsite staff and local first responders to prevent extensive damage.
  6. Site Logistics Costs: Where a data center is located relative to local airports, shipping routes, telecommunications infrastructure, and power lines is a consideration outside the data center proper. Should you need to arrange for significant new cabling, and the excavation costs that go with it, you may need to adjust your construction budget significantly.

Tuesday, April 5, 2016

Data Center Virtualization and Standardization

The demand for IT services continues to rise while IT budgets remain flat. This makes it increasingly difficult to manage growth, especially with a complex IT infrastructure. Maintenance and management increase the total cost of ownership, because applications cannot scale easily when built on isolated resources. Data center standardization, consolidation, and virtualization are the keys to reducing capital expenditure, eliminating inefficiencies, and meeting increasing growth demands. At every level of the data center, from compute to storage to networking, there is a need for rapidly provisioned, on-demand capacity that is reliable, highly available, and highly scalable, so that IT can deliver virtualization economics across the entire data center.

Server virtualization allows physical servers to be partitioned into multiple virtual servers, each running its own operating system and applications. Server virtualization facilitates management, improves scalability, and reduces capital expense by reducing the number of physical servers in the data center. Virtual server growth has led to increased storage and network demands. To keep up, the same proven principles of abstraction, pooling, and automation can be applied to standardize the storage and network layers. With software-defined storage, physical storage is decoupled from virtual workloads; storage resources are then abstracted to enable pooling, replication, and on-demand distribution for higher availability. The result is a storage layer that is standardized, aggregated, flexible, efficient, and scalable. With software-defined networking, the logical network is decoupled from the physical network topology, allowing IT to treat the physical network as a pool of transport capacity that can be consumed and repurposed on demand. As we move into the mobile cloud era, the same tools and processes used to virtualize and consolidate your on-premises data center can be used to facilitate your move to the hybrid cloud, where services from multiple heterogeneous providers can be seamlessly managed as part of a single virtual cloud.
            
Managing future growth while reducing cost and complexity is no longer impossible. Data center virtualization and consolidation help your IT team reduce capital expenditure and eliminate inefficiencies en route to meeting increasing growth demands and delivering virtualization economics across the data center. Ultimately, expanding to the hybrid cloud will lead to new services and business innovation.

Wednesday, March 30, 2016

Data Center Security


The first key challenge is risk management, which can be addressed with a layered physical security approach. Threats to the data center can take many forms, such as third-party contractors or employees who have access and can inflict intentional or unintentional damage. Deploying a layered security strategy provides the flexibility to deter, detect, or delay an intruder at every layer of data center security, reducing the risk of a breach. There are six layers of security:


  • LAYER 1 – Perimeter Defense: The site perimeter is not just the border; it is the first layer of data center protection. Measures used to fortify perimeter security include video surveillance, fencing, limited entry points with access control, physical security barriers such as anti-ram fences and gates, and a guard station with security personnel, all designed to deter intruders. Car traps and security personnel can also delay intruders.
  • LAYER 2 – Clear Zone: The second layer of security addresses the space between the perimeter and the building exterior. This area is monitored by intrusion detection sensors and video surveillance to identify breaches.
  • LAYER 3 – Facility Facade/Reception Area: The third layer is the highest level of perimeter security; here we have the opportunity to prevent unauthorized access into the facility.
  • LAYER 4 – Hallway/Escorted Area/Gray Space: The fourth layer of security validates the access rights of authorized individuals into specific environments such as the data hall, network operations center, and power and cooling facility areas.
  • LAYER 5 – Data Center Room: As you enter the data hall, the fifth layer of security selectively screens authorized staff, contractors, and visitors.
  • LAYER 6 – Data Center Cabinet: The sixth layer of security provides controlled access and accountability directly at the equipment location. The interoperability of these six layers mitigates risk and delivers effective, efficient protection of the facility's critical data.

Attacks can also come from the outside in, and today the most popular attacks are those targeting web applications. Hackers know that web apps are full of vulnerabilities that can lead to very profitable exploitation. Another popular data center attack strategy is the Distributed Denial of Service (DDoS) attack, in which the attacker generates massive amounts of traffic to overwhelm and paralyze your systems. Another common attack is the application-layer DoS (AppDoS) attack, which targets a specific application. These types of attacks can be mitigated by the effective use of firewalls, and there are different use cases for firewall technology. In the campus branch, a next-generation firewall can be deployed. An Intrusion Prevention System (IPS) relies on reputation and other intelligence data sources to provide additional defense. Application visibility and control lets us see and control the internet apps and content that employees access. And finally, Active Directory integration allows identities to be managed and controlled.

Tuesday, March 22, 2016

Power Usage Effectiveness (PUE)

PUE is an acronym for Power Usage Effectiveness. It is a measurement of the energy efficiency of a data center's physical infrastructure, such as the power and cooling equipment. PUE is not a measure of how efficient the IT equipment is; rather, it is a metric that quantifies the overhead power consumed in supporting the IT equipment. According to a recent study, US data center energy consumption is 2% of total US energy consumption, which is equivalent to the energy consumption of 7 million households.

To calculate PUE, take all the energy used to operate the data center and divide it by the energy consumed by the IT equipment, such as servers, network switches, and storage devices:

PUE = Total Facility Energy / IT Equipment Energy


For example, consider a 2N redundant data center in which 47 percent of the electrical power entering the facility actually powers the IT load, while the rest is consumed, or converted to heat, by the power, cooling, and lighting equipment. This includes devices such as UPSs, transformers, generators, chillers, pumps, and fans. If the total power consumed by the data center is 1000 kW and the IT load consumes 470 kW, the PUE of this data center is 1000 / 470 ≈ 2.13.
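The arithmetic above can be sketched in a few lines of Python; the function name and values are illustrative, not part of any standard tool:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """PUE = total facility power / IT equipment power."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# The 2N data center from the example: 1000 kW total, 470 kW IT load.
print(round(pue(1000, 470), 2))  # 2.13
```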



The theoretical best PUE that can be achieved is 1.0; that is, every watt consumed by the data center is consumed directly by the IT equipment. If the PUE is 3 or more, the data center is considered inefficient. According to the Uptime Institute data center survey, the average PUE is between 1.8 and 1.9.

A few ways to lower the PUE level:
  • The first step is to know the PUE of your data center. If it has not been determined, have an energy assessment performed by a data center specialist, who can also provide specific recommended improvements that often pay for the cost of the assessment within a year. In many data centers the cooling systems use more power than the IT equipment, so improvements to cooling generally have the biggest impact on PUE and overall energy savings.
  • Keep hot air and cold air from mixing, since mixing makes the cooling system very inefficient. Use containment solutions such as hot-aisle or cold-aisle containment, or vertical exhaust ducts, which are very effective at separating the hot and cold air streams.
  • Raise the temperature set point in the data center. The new ASHRAE guidelines allow rack temperatures as high as 80°F (27°C).
  • Finally, calculate and manage PUE on a constant basis. This can be done by installing meters and monitoring software.

Reference: https://www.youtube.com/watch?v=BiglstCxGDI

Wednesday, March 16, 2016

Selecting a Rack PDU

In this blog, we discuss the configuration options and which rack PDUs are best for your data center. When deploying PDUs, we have to consider the following questions: What kind of power do you have? How much power do you need? How much power do you draw? What plug types do you have? How much room do you need for the future? Will you add more devices to the rack? Will you need more power in the future? In many companies, some of the answers to these questions come from the facilities group, while others come from the IT group.

To calculate the power being used by our server and storage devices, we can add up the amp draw of all the equipment plugged into the PDUs. The amp-draw information can be gathered in several ways. The equipment manufacturer provides "nameplate" or "faceplate" power ratings, which are often calculated for worst-case scenarios. Most manufacturers offer power sizing or capacity planning tools to calculate the power actually used. Intelligent rack PDUs that monitor the power consumption of servers can be a valuable source for estimating the amps drawn by new servers. Power monitoring can be done at the whole-PDU level, on individual outlets, or on groups of outlets.
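As a rough sketch, summing nameplate amp draws could look like the following; the equipment names and ratings are purely illustrative, not taken from any real inventory:

```python
# Hypothetical nameplate amp ratings for gear plugged into one PDU.
# Nameplate values are worst-case, so this estimate is conservative.
equipment_amps = {
    "web-server-1": 2.4,
    "web-server-2": 2.4,
    "storage-array": 5.0,
    "network-switch": 1.2,
}

total_amps = sum(equipment_amps.values())
print(f"Total estimated draw: {total_amps:.1f} A")  # Total estimated draw: 11.0 A
```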
      
A few rack PDUs allow remote power management for monitoring power usage. Power outlet cycling is ideal for data centers without 24-hour staff coverage or for devices deployed in remote locations. The ability to schedule power-off on an outlet makes it easy to enforce IT power policies, such as switching off all non-production servers after 6 PM.
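A policy like "switch off all non-production servers after 6 PM" could be modeled as a simple check. The function and hours below are assumptions for illustration, not a real PDU API:

```python
# Sketch: decide whether an outlet should be powered at a given hour
# (24-hour clock). Production gear is exempt from the schedule.
def outlet_should_be_on(is_production: bool, hour: int,
                        on_hour: int = 6, off_hour: int = 18) -> bool:
    if is_production:
        return True                      # production servers are never cycled
    return on_hour <= hour < off_hour    # non-production: business hours only

print(outlet_should_be_on(False, 20))  # False: non-production, after 6 PM
print(outlet_should_be_on(True, 20))   # True: production stays on
```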

Some intelligent rack PDUs also perform environmental monitoring. With temperature and humidity monitoring, we can identify hot and cold spots in the data center or within a rack. By identifying cold spots where overcooling is taking place, we can raise the temperature set point on our Computer Room Air Conditioner (CRAC) units, and if space is available in the rack, we can add additional servers to it. By locating the hot zones in our data center, we can identify the cooling needs that prevent downtime and damage to our equipment. Sensors placed across the colocation floor can alert us to problems.
     
Overcooling and overprovisioning a data center increase operational costs and harm the environment. With rising cooling and power costs, the ability to monitor and control your power usage helps promote a cost-effective and greener data center.

Wednesday, March 9, 2016

Rack Power – Power Distribution Units (PDU)

Today's data centers are filled with compute and storage devices with ever-increasing power needs, so we have to select the best rack Power Distribution Units (PDUs) for the environment. In this blog, I will share some basic power terms and a few details about rack PDUs.

  • Ampere (Amp): The unit of electrical current, i.e., the rate at which electric charge flows through a circuit. Commonly abbreviated as amps.
  • Volt (V): The unit of electric potential difference between two points on a conducting wire.
  • Volt-Amps (VA): Voltage multiplied by amps (volts × amps). This rating is the apparent power, which represents the maximum power a device can draw. Kilovolt-amps (kVA) measure VA in thousands, i.e., 2000 VA = 2 kVA.
  • Watt (W): The measure of real power drawn by the load equipment. It is used as a measurement of both the power consumed and the heat generated by the equipment.
  • Power Factor: The ratio of real power to apparent power; in other words, the power that is consumed versus the power that is supplied. Most modern IT equipment has a power factor close to 1, which means the equipment uses the power supply efficiently; a power factor of less than 1 signifies less efficient equipment.
  • Circuit Breaker: A switch that protects electrical equipment from damage caused by an overload or a short circuit.
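To tie these terms together, here is a small illustrative calculation for a hypothetical 208 V device drawing 5 A, with an assumed power factor of 0.95:

```python
# Apparent power (VA) = volts x amps; real power (W) = VA x power factor.
volts, amps = 208, 5
apparent_power_va = volts * amps                 # 1040 VA
power_factor = 0.95                              # assumed, typical of modern IT gear
real_power_w = apparent_power_va * power_factor

print(apparent_power_va, round(real_power_w, 1))  # 1040 988.0
```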


Data center power is distributed as 208 V single-phase, high-line power, or as three-phase power; three-phase power is used because of its efficiency in delivering power. The National Electrical Code (NEC) is the United States standard for the safe installation of electrical wiring and equipment. It states that a PDU cannot carry a continuous measured load exceeding 80% of the connector or cable rating, where the NEC defines a continuous load as one lasting 3 hours or longer. This is sometimes referred to as the derated load: a 30 A rack PDU can carry a maximum continuous load of only 24 A. The NEC-rated load on a rack PDU must also be considered when data center operators want to provide power redundancy for their equipment. Without power redundancy, if the rack PDU fails, all the equipment it feeds shuts down. To prevent this, all important servers and infrastructure equipment should have multiple power supplies plugged into at least two different PDUs. Best practice is to never go above 50% of the PDU capacity; this is called PDU power balancing and provides power redundancy.
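The 80% derating rule and the 50% redundancy best practice described above can be expressed as a quick sketch (the helper names are my own, not NEC terminology):

```python
def continuous_capacity(breaker_amps: float) -> float:
    """NEC: continuous load may not exceed 80% of the rated capacity."""
    return breaker_amps * 0.8

def redundant_capacity(breaker_amps: float) -> float:
    """Best practice for redundancy: stay at or below 50% of PDU capacity."""
    return breaker_amps * 0.5

print(continuous_capacity(30))  # 24.0 A, matching the derating example above
print(redundant_capacity(30))   # 15.0 A when power balancing across two PDUs
```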

In my next blog, I will share some details about how to select a rack PDU for your data center. Stay tuned...

Saturday, March 5, 2016

Data Center Cooling

In data centers, we might think that most of the heat is produced by the servers, but that is not entirely correct: a large amount of heat is also produced by the communications equipment. The main purpose of data center cooling technology is to provide stable environmental conditions for the Information Technology Equipment (ITE). In this blog, we discuss data center cooling and the equipment used to cool the data center.
Most data centers operate in a temperature range between 65°F and 75°F. To attain this temperature, we can use cooling devices such as a Computer Room Air Conditioner (CRAC), a Computer Room Air Handler (CRAH), a chiller, or an economizer. There are two types of CRACs: units served by an external chiller plant and compressorized units. CRAC units pull hot air from the top of the room and push conditioned air into the space below the raised floor; through perforated tiles, the cold air is delivered to the front of the servers. We should also prevent the mixing of hot and cold air. This can be done with a hot-aisle containment system, in which the hot exhaust air is contained and returned to the air handlers. The complete mechanism of a hot-aisle containment system is shown in Figure 1.

      
(Figure 1 - Hot Aisle Containment System)

A CRAH is a device used to deal with the heat produced by the IT equipment. It uses fans, cooling coils, and a water chiller system to remove heat from the data center. Chillers remove heat from one element and deposit it into another. Without chillers, the temperature would quickly rise, corrupting mission-critical data and destroying hardware. All these devices consume large amounts of electricity and require dedicated power supplies. We can save on these energy costs by using an economizer, which can be used in areas with very cold winters (below about 40°F). The economizer draws cold air from outside and circulates it into the data center, while exhaust openings and fans remove the hot air. This helps cool the IT equipment and reduces the power load on the chillers.
The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) standard is the most widely recognized air-conditioning standard. According to ASHRAE, the temperature of the data center should be between 65°F and 85°F, and the humidity should be maintained between a 42°F and a 59°F dew point. Nowadays, many data centers are turning to computational fluid dynamics modeling, which helps reduce cooling costs. In my next blog, I will share other important aspects of the data center.
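A minimal sketch of checking a sensor reading against the ASHRAE ranges quoted above; the function name and sample readings are illustrative:

```python
# Check a reading against the quoted envelope: 65-85 F dry bulb,
# 42-59 F dew point.
def within_ashrae(temp_f: float, dew_point_f: float) -> bool:
    return 65 <= temp_f <= 85 and 42 <= dew_point_f <= 59

print(within_ashrae(72, 50))  # True: comfortably inside the envelope
print(within_ashrae(90, 50))  # False: too hot
```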

Reference: https://journal.uptimeinstitute.com/a-look-at-data-center-cooling-technologies/


Sunday, February 28, 2016

Few Guidelines to Improve the Data Center Efficiency

“Great design is not just a solution; it is the elimination of the problem”

In today's technical world, people come up with many ideas, and a good number of them get implemented. For every problem, there should be a solution. So in this blog, I will share a few solutions that can be used to overcome the challenges faced in the data center.
  • Due to a lack of trained employees, data centers face many issues in the resource management area, so people need to be educated more in areas like data center operation and management. Doing so also eliminates dependency on any single person. A more permanent solution is to automate the tasks that are currently done manually, which would significantly reduce the need for resources.
  • Another major challenge faced in the data center is the lack of storage capacity. To overcome this, storage virtualization is an effective solution, as virtualization significantly increases storage capacity utilization. It can be used to:

o   Combat exponential data growth.
o   Increase the utilization of existing storage assets.
o   Reduce storage capital expenditure (capex).
o   Reduce Operating Costs (opex) and Total Cost of Ownership (TCO).
o   Simplify storage management.

(Virtualization)
There are three key properties of virtualization. The first is partitioning: we can run multiple operating systems on one physical machine and fully utilize the server's resources. The second is isolation: faults and security breaches are isolated at the virtual-machine level, and CPU, memory, disk, and network resources are dynamically controlled per virtual machine. The third is encapsulation: the entire state of a virtual machine is encapsulated in hardware-independent files, so whole virtual machines can be reused or transferred with a simple file copy.
  • High energy consumption is the next major problem faced by data center managers. First, unused servers should be identified and decommissioned, which decreases electricity consumption. According to the Uptime Institute, decommissioning a single 1U rack server can save $500 annually in energy, $500 in operating system licenses, and $1,500 in hardware maintenance costs. Second, lightly utilized servers can be consolidated by combining applications onto a single server and a single operating system instance. Server clustering also reduces the number of backup or standby servers needed by a system, which improves availability and uses hardware more efficiently.
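The arithmetic behind the Uptime Institute figures quoted above, per decommissioned 1U server (the ten-server batch is a hypothetical extrapolation):

```python
# Annual savings per decommissioned 1U server, from the figures above.
energy, licenses, maintenance = 500, 500, 1500
per_server = energy + licenses + maintenance

print(per_server)        # 2500 dollars per server per year
print(10 * per_server)   # 25000 dollars for a hypothetical batch of ten
```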


Above are a few steps that can be used to improve the efficiency of the data center.

In my next blog, I will share more insights on different aspects of data center. Stay Tuned...

Reference: Notes from Professor, William F. Slater, Data Center Architecture Course

Thursday, February 18, 2016

Challenges Faced in Data Center – Part II

Luis Gutierrez, an American politician, once said: “According to the Privacy Rights Center, up to 10 million Americans are victims of ID theft each year. They have a right to be notified when their most sensitive health data is stolen.” This clearly shows that information is not always stored securely. In this blog, I will continue with a few more challenges faced in the data center.


1.     High Energy consumption:

Current data centers are more energy efficient than the earlier data centers built at the start of the 21st century. Power consumption is a major factor in determining the efficiency of a data center. To reduce power consumption, it must first be accurately measured, in all the places where power is consumed: IT equipment, power distribution infrastructure, ventilation and cooling equipment, security equipment, water treatment equipment, and so on. Because many of the tasks involved require manual intervention, these measurements are often not collected. This problem can be eliminated by deploying a Data Center Infrastructure Management (DCIM) tool, which automatically extracts and displays current energy usage, so that managers can concentrate on the areas where the most power is consumed. By implementing a DCIM tool, many data centers reduce power consumption by 15 to 25 percent.

2.     Increasing Energy and Facilities Costs:

In recent years, data center managers have faced more problems operating data centers due to increasing operational costs. Approximately 40 percent of the overall cost is spent on employee salaries; the next major portion is spent on power and IT equipment cooling. Costs can be reduced by eliminating old systems and shifting workloads to more efficient hardware, which can reduce server deployments by 5% to 20%. New approaches can also help reduce infrastructure costs. For example, Microsoft is experimenting with deploying a data center under the ocean, which could minimize the use of cooling equipment.

These are the major challenges faced in the data center. However, there are optimal solutions to overcome these problems, so in my next blog, I will share a few solutions to these challenges. Stay tuned...

Thursday, February 11, 2016

Challenges Faced in Data Center - Part I


Hello Everyone…,

In my last two blogs, I threw some light on the physical structure of data centers and covered the equipment used to build them. In my upcoming blogs, I will share a few of the challenges faced in data centers.

1.     Resource Shortage:

In today's world, as technology advances, the amount of work people must do generally decreases: automation takes a considerable workload off their shoulders. That is not the case in data centers, which need more people to maintain them, and some data center tasks require manual effort that is impossible without qualified staff. Statistics suggest that 38 percent of data centers are understaffed and only 4 percent are overstaffed. If we want to reduce staffing needs in data centers, we have to find ways to automate the tasks performed routinely. In the coming years, there will be high demand for people with deep data center knowledge, so get ready, folks!

2.     Lack of Storage Space:

Data centers today face a major challenge in storing data. As a business grows, the amount of data collected from various sources grows with it, and that data must be stored and processed for future use. Data centers must either be built to withstand these large volumes from the start, or expand their storage capacity later by replacing hardware or building out a new storage area. Going forward, new data centers will need an optimal way to accommodate these large amounts of data.
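
Capacity planning for that growth is largely arithmetic: project the data volume forward at the observed growth rate and see when it crosses the installed capacity. A small sketch, where the starting volume, capacity, and 40% annual growth rate are made-up numbers:

```python
# Project storage demand under compound annual growth and find the year
# in which it first exceeds installed capacity. Inputs are hypothetical.

def years_until_full(current_tb: float, capacity_tb: float, annual_growth: float) -> int:
    """Number of whole years until the data volume exceeds capacity."""
    years = 0
    volume = current_tb
    while volume <= capacity_tb:
        volume *= 1 + annual_growth
        years += 1
    return years

# 200 TB today, 1 PB (1,000 TB) installed, data growing 40% per year:
print(years_until_full(200, 1000, 0.40))  # -> 5
```

A projection like this tells the operator how long they have before the choice between replacing hardware and building out new floor space becomes unavoidable.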

3.     Data Security:

Just as data center staff strive hard to protect data, hackers work equally hard to steal it or inject malicious content. To maintain strong relationships with clients, operators must secure data against such attacks, a problem made harder by the emergence of big data and of advanced targeted attacks. This makes it all the more important to protect data in transit, while it is stored on an active device, and after the storage device has been retired. Attacks come not only from cyber criminals but also from authorized users, so operators must be especially vigilant when granting access.

These are some of the challenges faced in the data center. In my next blog, I will share a few more. Stay tuned!

Tuesday, February 2, 2016

Tulip Data Center

Hello Everyone…,
          Hope you gathered some information about the Equinix data center from my previous blog. In this blog, I will share some information about one of the largest data centers, located in India. Nowadays more and more people use the internet, and huge volumes of data are collected from them, so the data center is key to the business and has to be large, secure, and efficient. In Bengaluru, the telecommunications provider Tulip has built Asia's largest data center, the third largest in the world. It is a Tier III facility with 900,000 square feet of built-up space, and ample space is available all around the building for substations, generators, and diesel tanks. It is a seven-storey building, with the two base floors housing utilities and offices. From the second floor upward, the data center has four towers with two halls each. The facility also provides 50 meeting rooms and up to 1,500 seats for customers' staff.
 
An integrated management system ensures that every piece of equipment in the data center is managed effectively, so that any problem is addressed immediately. The facility draws 66 kV power from two separate grids; 11 kV feeds carry power to each of the 20 floor plates, where it is stepped down to 450 V. Sixteen generators of 4 MW each cover both short and long power outages, four more generators are kept on standby, and three diesel tanks provide fuel storage. The Tulip data center is designed for up to 14,000 racks, each fed from two different UPS systems, each UPS with 15 minutes of battery backup. The power redundancy of the facility is N+1.
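
The figures above imply a rough per-rack power budget. Here is a quick sanity check; treating the 16 primary generators as the full backup capacity and averaging evenly across racks is my simplification, not Tulip's published design:

```python
# Rough per-rack power budget for the facility described above:
# 16 primary generators of 4 MW each, up to 14,000 racks.

generators = 16
mw_each = 4
racks = 14_000

total_backup_mw = generators * mw_each        # 64 MW of primary generator capacity
kw_per_rack = total_backup_mw * 1000 / racks  # average budget per rack in kW

print(f"total backup capacity: {total_backup_mw} MW")
print(f"average per-rack budget: {kw_per_rack:.2f} kW")  # about 4.57 kW/rack
```

A figure in the 4-5 kW range is a plausible average rack budget, which suggests the quoted generator count and rack count are consistent with each other.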
For cooling the equipment, 39 air-cooled, water-based chillers are installed on the roof of the building. To improve the efficiency of the cooling system, the cold aisles are contained so that cold air does not mix with the hot air in the rest of the room. The Tulip data center is fully secured: the site is surrounded by a 10-foot concrete wall topped by a further 4-foot intelligent fence, and is monitored by 1,500 cameras. Two separate gates handle the movement of staff and of equipment, and lockers are available for storing non-permitted items. Staff, visitors, and clients are thoroughly checked with metal detectors, hand-held scanners, and explosive sniffers, and all bags are scanned through an X-ray machine. Firewalls and best-in-class security products ensure the highest level of security for the data itself. The facility is also protected by a firefighting system with smoke and heat detection sensors installed on the ceilings.
From these two blogs, you should now have a picture of the physical elements that constitute a data center. In the next blog, I will share some information about the problems faced in data centers. Stay tuned!
 

Friday, January 29, 2016

Data Center Design Criteria

The Uptime Institute has established four levels of fault tolerance for data centers. The tier classification below is the accepted standard for defining a data center's level of fault tolerance. A data center is said to be concurrently maintainable only if planned maintenance activity can be performed without shutting down the critical loads.


TIER 1: Single path for power and cooling distribution; no redundant components (less than 28.8 hours of downtime/year)
TIER 2: Single path for power and cooling distribution; redundant components (less than 22.0 hours of downtime/year)
TIER 3: Multiple power and cooling distribution paths, but only one path active; redundant components; concurrently maintainable (less than 1.6 hours of downtime/year)
TIER 4: Multiple active power and cooling distribution paths; redundant components; fault tolerant (less than 0.4 hours of downtime/year)
Source: The Uptime Institute
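
The downtime figures in the table map directly to availability percentages, and converting is one line of arithmetic against the 8,760 hours in a year:

```python
# Convert the annual downtime figures from the tier table above
# into availability percentages (8,760 hours in a non-leap year).

HOURS_PER_YEAR = 8760

def availability(downtime_hours: float) -> float:
    """Percentage of the year the facility is up."""
    return (1 - downtime_hours / HOURS_PER_YEAR) * 100

tiers = {"TIER 1": 28.8, "TIER 2": 22.0, "TIER 3": 1.6, "TIER 4": 0.4}
for tier, hours in tiers.items():
    print(f"{tier}: {availability(hours):.3f}% available")
# TIER 1: 99.671%, TIER 2: 99.749%, TIER 3: 99.982%, TIER 4: 99.995%
```

This is why tier levels are often quoted as "three nines" or "four nines" of availability rather than as raw downtime hours.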

·        TIER 1 Data Center

A TIER 1 data center is a basic data center, first deployed in 1965. It is prone to disruptions from both planned and unplanned activity, and may or may not have a generator or UPS. To perform annual maintenance or repair work, the data center must be completely shut down.

·        TIER 2 Data Center

A TIER 2 data center, first deployed in 1970, has all the features of TIER 1 plus redundant components, making it less prone to disruptions from planned and unplanned activity. It has a raised floor height of 18 inches and multiple servers, UPS units, and generators. The redundancy level is "Need plus one" (N+1) on a single-threaded distribution path. At this tier, applications are typically covered by some kind of business continuity or disaster recovery plan.

·        TIER 3 Data Center

TIER 3 data centers, first deployed in 1985, incorporate all the features of TIER 2. At this tier, planned maintenance activity (such as repair and replacement of components, component testing, and programmed maintenance) can be performed without disrupting the computer hardware, but unplanned events (such as failures of facility infrastructure components) will still cause disruption. This type of data center has redundant power, cooling, and networking systems, is highly secured, and can ride through up to 72 hours of power outage.

·        TIER 4 Data Center


A TIER 4 data center has all the features of TIER 3 and is fault tolerant. First deployed in 1995, this type of data center has multiple active distribution paths in a System + System configuration (i.e., two separate UPS systems, each with N+1 redundancy). At this tier, all computer hardware requires dual power inputs, rooms and zones are isolated, and the raised floor height is 30 to 36 inches. To achieve high availability, reliability, and serviceability, this tier employs clustering, Direct Access Storage Devices (DASD), 24/7 monitoring, thermal storage, and more. TIER 4 data centers are the most expensive to build and maintain.

Sunday, January 24, 2016

Equinix Chicago Data Center

Hello Everyone...,
            In today's world, large amounts of data are being collected and processed. You might wonder where all this data is stored: in a physical facility called a data center. In this blog we will discuss one in particular, the Equinix Chicago Data Center. Equinix is the world's largest International Business Exchange (IBX) data center and colocation provider, with 145 data centers in fifteen countries and more than two million square feet of space in the US alone. The Chicago data center occupies 280,000 square feet in Elk Grove Village. It is a three-storey building: the first floor houses the facility infrastructure, while the second and third floors hold the colocation space.
            Overall, Equinix provides very good security to keep data safe. The Chicago data center has multiple layers and levels of security: dozens of cameras placed inside the facility monitor it effectively, biometric scanners prevent unauthorized people from entering, and the locked doors require a unique identification code. The floor of the facility infrastructure is a concrete slab, while the colocation space has a tiled floor that is cleaned daily. The Chicago facility is called an "upside-down" data center: overhead cable trays carry thousands of cross-connects of copper, fiber, and coaxial cable.
 
 

(Inside the Equinix Chicago Data Center)

The temperature of the colocation floor is maintained between 68°F and 72°F. The Chicago data center has four air handler rooms, located on the east and west sides of the colocation room. Warm air from the colocation room is drawn through filters, where it is cooled, then ducted back to the colocation room and delivered to the equipment.

In almost all data centers, most of the heat is produced by the computer equipment. In the Chicago data center, sensors placed throughout the floors monitor the cooling requirements in each area. When the sensors call for more cooling in a particular area, the building management system increases the fan speed and supplies more cool air where it is needed. The facility has eight 750-ton Trane chillers, and on the roof it has eight bottom-air-coil water cooling towers. When the outdoor air temperature drops to 45°F, the heat exchangers stop the mechanical cooling and use outdoor air to cool the colocation space.
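
The control behavior described here, boost fans where sensors call for cooling and switch to outdoor-air "free cooling" when it is cold enough outside, can be sketched roughly as follows. The setpoint, the proportional fan rule, and the exact thresholds are my simplification for illustration, not the actual building management system logic:

```python
# Toy sketch of the building-management behavior described above:
# raise fan speed in zones reporting high temperature, and switch to
# outdoor-air economizer cooling when it is 45°F or colder outside.

SETPOINT_F = 72.0    # top of the 68-72°F colocation band
ECONOMIZER_F = 45.0  # outdoor temp at which mechanical cooling stops

def fan_speed_pct(zone_temp_f: float) -> int:
    """Simple proportional rule: +10% fan speed per degree over setpoint."""
    over = max(0.0, zone_temp_f - SETPOINT_F)
    return min(100, 50 + int(over * 10))  # idle at 50%, capped at 100%

def cooling_mode(outdoor_temp_f: float) -> str:
    """Choose between mechanical chillers and outdoor-air free cooling."""
    return "free-cooling" if outdoor_temp_f <= ECONOMIZER_F else "mechanical"

print(fan_speed_pct(75.0))  # zone 3°F over setpoint -> 80
print(cooling_mode(40.0))   # cold outside -> free-cooling
```

The real system is far more sophisticated, but the core idea is the same: respond locally to sensor readings, and exploit cold outdoor air whenever it is available, since free cooling is much cheaper than running the chillers.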

The overall capacity of the facility is 30 MW. The data center contains 15 diesel generators, 12 kept online and 3 on standby, plus two swing generators that can be rotated in for maintenance. Power arrives on two 34.5 kV lines, and the rest of the facility operates at 480 V. Overall, Equinix strives hard to satisfy its customers in every way. I hope I have provided some insight into the Equinix Chicago data center. I will share more information about data centers and their operation in my next blog.

Reference: https://www.youtube.com/watch?v=WBIl0curTxU