BGP / AS number / PI space
In most cases, a user will take a small allocation of IP address space from a single network-access provider and assign addresses from it to their equipment. Where they require a very large allocation of address space, or need their address space to be portable (able to be moved between network-access providers, or reachable concurrently through multiple providers), they would usually request provider-independent (PI) address space and their own autonomous system (AS) number.
In order to qualify for PI space and an AS number, an organisation needs to justify a requirement for an address range of 512 addresses or more; that is to say, they must need more addresses than the next-smallest available network size provides, this being 256 addresses. A successful application relies on verifiable evidence of the address need, and with the current worldwide shortage of Internet address space, this is enforced quite stringently.
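In CIDR notation, these two network sizes correspond to a /23 (512 addresses) and a /24 (256 addresses). A quick check using Python's ipaddress module, with example prefixes rather than real allocations:

```python
import ipaddress

# A /23 holds 512 addresses -- the minimum justification for PI space.
pi_block = ipaddress.ip_network("203.0.112.0/23")
print(pi_block.num_addresses)  # 512

# The next-smallest network size, a /24, holds 256 addresses.
single_net = ipaddress.ip_network("203.0.113.0/24")
print(single_net.num_addresses)  # 256
```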
Once an organisation has an allocation of PI space and an associated AS number, they will need to set up one or more routers running the Border Gateway Protocol (BGP), which announce their addresses to one or more network-access providers who will carry their traffic. The organisation's equipment will sit behind this router and will be configured with addresses from the PI allocation.
In a metro Ethernet environment, the access layer of the network most commonly uses virtual network tags to isolate individual networks. When a service provider uses this technology to segment individual customers, this usually prohibits those customers from using the same technology in any portion of their operations which comes into contact with that service provider's network.
Using the so-called Q-in-Q approach, a service provider can tunnel a customer's virtual network tags within their own, thereby allowing that customer full use of the technology for their own purposes while not introducing significant additional complexity in the service provider's network.
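As a rough sketch of what the stacked tags look like on the wire (simplified: the 16-bit tag control field also carries priority bits, left at zero here; 0x88A8 and 0x8100 are the standard 802.1ad and 802.1Q EtherTypes):

```python
import struct

def qinq_header(dst_mac, src_mac, provider_vid, customer_vid):
    """Build an Ethernet header carrying a customer 802.1Q tag (TPID 0x8100)
    inside a service-provider 802.1ad tag (TPID 0x88A8)."""
    return (dst_mac + src_mac
            + struct.pack("!HH", 0x88A8, provider_vid)   # outer (provider) tag
            + struct.pack("!HH", 0x8100, customer_vid)   # inner (customer) tag
            + struct.pack("!H", 0x0800))                 # payload EtherType: IPv4

hdr = qinq_header(b"\x02" * 6, b"\x04" * 6, 100, 42)
outer_tpid, outer_vid = struct.unpack("!HH", hdr[12:16])
inner_tpid, inner_vid = struct.unpack("!HH", hdr[16:20])
print(hex(outer_tpid), outer_vid)  # 0x88a8 100
print(hex(inner_tpid), inner_vid)  # 0x8100 42
```

The provider's switches only inspect the outer tag, so the customer's own tag numbering travels through untouched.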
IP address / IPv6 address
Each host which is directly accessible on the public Internet is given its own address that is globally unique. Collections of addresses are grouped together into networks, which form the smallest parts of an addressing hierarchy containing progressively larger networks. At each level, routers on the Internet know what paths to take to reach other networks, and so through a series of hops, any host on the Internet should be able to reach any other host.
The Internet Protocol standard provides space for approximately 4.3 billion addresses. Owing to this limitation, as well as some imprudent decisions made in the early days of the Internet, this pool of addresses is fast nearing exhaustion. To address these limitations, a successor, Internet Protocol version 6 (IPv6), is in the process of being adopted across the Internet. The new protocol has space for over 340 undecillion addresses, making exhaustion very unlikely.
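Both pool sizes follow directly from the address widths (32 bits for IPv4, 128 bits for IPv6):

```python
ipv4_total = 2 ** 32       # 32-bit addresses
ipv6_total = 2 ** 128      # 128-bit addresses

print(f"IPv4: {ipv4_total:,} addresses")    # 4,294,967,296 -- roughly 4.3 billion
print(f"IPv6: {ipv6_total:.3e} addresses")  # about 3.403e+38 -- over 340 undecillion
```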
In fibre-optic communications, pulses of light are reflected through cables whose cores are formed of plastic or glass. Traditionally, the limiting factor to data throughput using such a method is how quickly the sending laser or LED and receiving photodetector can reliably toggle the light on and off and distinguish the pattern. At present, the commonly available optical signalling equipment for Ethernet networking can achieve a throughput of 10 gigabits per second.
One way of being able to send more than 10 gigabits per second across a fibre connection without needing to use optical signalling equipment with a greater throughput capability is to have multiple pairs of sending and receiving apparatus, each using a different wavelength of light. This technique is called wavelength division multiplexing, and employs devices at either end of a link which take multiple inputs at different light wavelengths and combine them down one fibre pair.
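The arithmetic is straightforward: each wavelength carries an independent channel, so capacity scales linearly with the number of wavelengths (the channel counts below are purely illustrative):

```python
channel_rate_gbps = 10  # per-wavelength throughput of the optical equipment

for wavelengths in (4, 8, 16):
    aggregate_gbps = wavelengths * channel_rate_gbps
    print(f"{wavelengths} wavelengths x {channel_rate_gbps} Gbit/s = "
          f"{aggregate_gbps} Gbit/s down one fibre pair")
```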
The routing protocol Open Shortest Path First is used within a network to allow routing devices to tell one another about the IP address ranges that are attached to them. By sharing this information among themselves, all routing devices within a network are able to know about all the IP addresses present within that network, and how best to get from one part of the network to another.
The way in which OSPF dynamically advertises information about routes within a network allows for traffic to take the most efficient route between locations during normal operation, and then to quickly adjust to another route should a connection within the network go offline. This allows the network to offer continued service in the case of a failure, without interruption, where diverse routes exist.
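At its core, OSPF's route calculation is Dijkstra's shortest-path-first algorithm run over the shared link-state information. A minimal sketch, with an invented topology of four routers and per-link costs:

```python
import heapq

def shortest_paths(graph, source):
    """Dijkstra's algorithm -- the 'shortest path first' at the heart of OSPF.
    graph maps each router to a dict of {neighbour: link_cost}."""
    dist = {source: 0}
    queue = [(0, source)]
    while queue:
        cost, node = heapq.heappop(queue)
        if cost > dist.get(node, float("inf")):
            continue  # stale queue entry
        for neighbour, link_cost in graph[node].items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_cost
                heapq.heappush(queue, (new_cost, neighbour))
    return dist

# An invented four-router topology; lower cost is preferred.
topology = {
    "A": {"B": 10, "C": 1},
    "B": {"A": 10, "C": 1, "D": 1},
    "C": {"A": 1, "B": 1, "D": 10},
    "D": {"B": 1, "C": 10},
}
paths = shortest_paths(topology, "A")
print(paths)  # traffic from A reaches B via C (total cost 2), not directly (cost 10)
```

If the A-C link went offline, re-running the calculation without that link would immediately yield the next-best routes, which is exactly the fast re-convergence described above.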
A server will typically expect to send and receive all traffic to and from the rest of the Internet via a single router address. Of course, using just a single router, no matter how resilient that device may be, exposes a single point of failure in the design. One way to counter this is the Hot Standby Router Protocol (HSRP), a redundancy protocol developed by Cisco to allow multiple routers to present a single router address.
This mechanism operates by having each router present, in addition to its own address, a shared address with which the server communicates. During normal operation, one of the two routers answers as the shared address while the other monitors it to ensure it remains operational. When the standby router notices that the active router has gone offline, it assumes operation of the shared address, thereby allowing continuation of service.
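The failover behaviour can be sketched as a toy model (heavily simplified: real HSRP elects the active router using multicast hello packets, priorities and timers; the router names here are invented):

```python
class Router:
    def __init__(self, name, priority):
        self.name = name          # hostname, invented for the example
        self.priority = priority  # higher priority wins the shared address
        self.alive = True

def active_router(routers):
    """Return the highest-priority live router; it answers for the shared address."""
    live = [r for r in routers if r.alive]
    return max(live, key=lambda r: r.priority) if live else None

primary = Router("rtr-a", priority=110)
standby = Router("rtr-b", priority=100)
pair = [primary, standby]

print(active_router(pair).name)  # rtr-a answers during normal operation
primary.alive = False            # the active router goes offline...
print(active_router(pair).name)  # ...rtr-b assumes the shared address
```

Throughout, the server keeps talking to the one shared address and never needs reconfiguring.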
One of the early and most widely supported methods for uploading and downloading files over the Internet is the File Transfer Protocol. A large choice of both free and commercial FTP clients exist across almost all operating systems offering a multitude of different features.
Designed to allow hosts to locally mount filesystems from remote machines, the Network File System is mostly used in Unix-like environments. Typically, operating systems either come with NFS client support built in as standard, or the operating system vendor provides such support as an add-on. Although less common, NFS client software provided by other parties does exist. The feature set usually does not vary significantly between implementations.
Although many of the implementation details are quite dissimilar to FTP, the SSH File Transfer Protocol aims to achieve the same type of functionality, allowing for the uploading and downloading of files over the Internet. SFTP is an optional subsystem of the Secure Shell (SSH) protocol, and relies on SSH to provide the authentication, integrity and confidentiality mechanisms that are the notable advantages of this over and above FTP.
The SSH protocol includes support for common hashing algorithms such as MD5 and SHA1, as well as common encryption algorithms such as Triple DES and AES. Users of SFTP can be confident that, when modern algorithms are selected, data transmitted using SFTP cannot feasibly be intercepted and deciphered using currently known forms of cryptanalysis.
As with FTP, a multitude of SFTP clients exist for a variety of operating systems.
Following an open selection process, the United States National Institute of Standards and Technology in 2000 selected a successor to the ageing Data Encryption Standard originally adopted in 1976; this became the Advanced Encryption Standard (AES). Since 2003, AES has been the US government-approved encryption cipher for top secret classified data.
Where a key length of 256 bits is used, a practical brute-force attack against AES using processing capabilities publicly available at the time of writing would take vastly longer to complete than the human race is likely to continue to exist. For all practical intents and purposes, the AES encryption algorithm can be considered unbreakable by brute force.
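The claim is easy to sanity-check: even assuming an implausibly generous attack rate of one trillion keys tested per second, exhausting a 256-bit keyspace takes an astronomical number of years:

```python
keyspace = 2 ** 256                      # number of possible 256-bit keys
keys_per_second = 10 ** 12               # a very optimistic brute-force rate
seconds_per_year = 60 * 60 * 24 * 365

years_to_exhaust = keyspace // (keys_per_second * seconds_per_year)
print(f"{years_to_exhaust:.2e} years to try every key")  # about 3.67e+57
```

For comparison, the universe is on the order of 1.4e+10 years old.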
The Uptime Institute's tiered classification system is a four-tier, industry-standard approach that provides a simple and effective means of identifying different data centre site infrastructure design topologies and addresses common benchmarking needs. The four tiers, as classified by The Uptime Institute, are the following:
- Tier I: Non-redundant capacity components (single uplink and servers). A single path for power and cooling distribution, without redundant components, providing 99.671% availability.
- Tier II: Tier I + redundant capacity components. A single path for power and cooling distribution, with redundant components, providing 99.741% availability.
- Tier III: Tier II + dual-powered equipment and multiple uplinks. Multiple power and cooling distribution paths, but with only one path active; has redundant components and is concurrently maintainable, providing 99.982% availability.
- Tier IV: Tier III + fully fault-tolerant components, including uplinks, storage, chillers, HVAC systems and servers; everything is dual-powered. Multiple active power and cooling distribution paths, with redundant components, fault tolerant, providing 99.995% availability.
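Those availability percentages translate into annual downtime as follows (a simple calculation over a 365-day year):

```python
minutes_per_year = 365 * 24 * 60  # 525,600

tier_availability = {"I": 99.671, "II": 99.741, "III": 99.982, "IV": 99.995}
downtime_hours = {}
for tier, pct in tier_availability.items():
    downtime_hours[tier] = minutes_per_year * (100 - pct) / 100 / 60
    print(f"Tier {tier}: {pct}% availability -> "
          f"{downtime_hours[tier]:.1f} hours of downtime per year")
```

That is roughly 28.8 hours per year at Tier I, falling to around 0.4 hours at Tier IV.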
Tier 1 transit
Unless requiring Internet transit on the scale of tens of gigabits per second, most organisations will not be able to acquire connectivity directly from a tier 1 provider. Some tier 2 providers offer a pseudo-direct connection to a tier 1 provider from their network, using a portion of their own connection to that upstream provider.
This is a good solution for organisations that wish to manage their own connections to tier 1 networks, rather than electing to take a multihomed transit service from a tier 2 provider that will manage diverse and resilient connections to multiple tier 1 networks for them behind the scenes.
Physical space in racks (cabinets) and supply of power and cooling.
Unit of current for electricity supplied to colocation customers. Full racks are usually provisioned as 8A, 16A or 32A; half-rack, quarter-rack or 1U customers are usually provisioned with a part thereof.
A single server might use anything between 0.5A and 4A depending on processor, memory and disk specification, as well as workload.
Note that we charge for the supply of power in Amps based on delivery at 240 Volts in the datacentres. The energy ratings of equipment are typically specified in Watts, which is Volts×Amps.
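Converting a rack's current allocation into available power at 240V is a single multiplication:

```python
voltage = 240  # supply voltage in the datacentres

for amps in (8, 16, 32):
    watts = voltage * amps
    print(f"{amps}A feed -> {watts} W available")
# e.g. a single server drawing 2A consumes 240 x 2 = 480 W
```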
Billing based on volume of data sent and received during a month, measured in bytes.
Billing based on average bandwidth utilised in an outbound and inbound direction during a month, measured in bits.
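The two billing models can be related to one another: a month of transfer at a given sustained average rate corresponds to a particular volume of data. A sketch, using a 30-day month and decimal units:

```python
seconds_per_month = 30 * 24 * 60 * 60    # 2,592,000

avg_rate_mbps = 10                       # sustained average rate for the month
volume_bytes = avg_rate_mbps * 1_000_000 / 8 * seconds_per_month
print(f"{volume_bytes / 1e9:,.0f} GB transferred in the month")  # 3,240 GB
```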
Units of bandwidth (in bits per second); Kilobits per second / Megabits per second / Gigabits per second.
A measurement of rack space; 1.75 inches / 44.45 millimetres. All rack mounted equipment has a height measured in integer multiples of 1U.
All our standard racks will accommodate equipment which is 19 inches / 482.6 millimetres in width, being the standard rack-mountable equipment width.
A full rack typically has space for 42U – 48U of equipment, depending on the manufacturer and model of the rack. Half-rack and quarter-rack enclosures are of course divisions of these full rack capacities.
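The usable mounting height follows directly from the rack-unit dimension given above:

```python
u_height_mm = 44.45  # one rack unit

for units in (42, 48):
    height_m = units * u_height_mm / 1000
    print(f"{units}U rack: {height_m:.2f} m of vertical mounting space")
```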
A rack, or part thereof, will either have a single-feed supply or dual-feed supplies. This refers to whether or not there are multiple power distribution units coming in to the rack from separate power sources.
Where dual feeds are supplied, a customer will typically use servers and networking equipment which has multiple power supplies in a redundant configuration so that they can connect to both feeds concurrently.
This is a connection between a customer's premises and our datacentre network. Over a leased line, transit and interconnect services can be provided.
In simplified terms, a multihomed connection is one which is linked to the rest of the Internet through multiple service providers.
Whereas full transit assures the delivery of traffic to and from all Internet destinations, partial transit applies only to a subset of routes to a limited number of destinations.
In our case, partial transit simply includes all those routes available to us through settlement-free peering agreements.
Examples of the peering exchanges of which we are a member, and at which we have arranged settlement-free peering with many other members, include LINX, LIPEX and LONAP.
A connection between two locations which, for all intents and purposes, is private. Access to the Internet is not provided on these, they simply privately connect separate locations where our customers have a presence.
Our customers will typically use such connections either to form part of a path between two locations, or to allow their colocated servers at multiple locations to talk to one another for purposes such as replication, possibly as part of a disaster recovery plan.
The speed at which a physical port on one of our switches has the capability to send and receive data.
- Ethernet = 10Mbps
- Fast Ethernet (FE) = 100Mbps
- Gigabit Ethernet (GigE) = 1000Mbps (1Gbps)
Of course, in most cases, the committed data rate selected by the customer will be somewhat less than the speed achievable by the port.
Digital subscriber line. A cheap type of connection available to connect customer premises.
Providers typically offer bandwidths between 2Mbps and 24Mbps, but the connections are almost always contended; that is, the provider sells much more bandwidth than they have the capability to deliver. Common contention ratios are 5:1, 10:1, 20:1 and 50:1, depending on the price.
Additionally, the service level agreements usually associated with DSLs are mostly very poor, again attributable to the price.
Most people requiring a high degree of reliability or performance will select a leased line in preference to DSL.
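The effect of contention on the guaranteed minimum rate is simple to illustrate (taking a 24Mbps line as an example):

```python
line_rate_mbps = 24  # the headline DSL speed

for ratio in (5, 10, 20, 50):
    worst_case = line_rate_mbps / ratio
    print(f"{ratio}:1 contention -> {worst_case:.2f} Mbps guaranteed minimum")
```

At 50:1 contention, the guaranteed minimum is under half a megabit per second, which goes some way to explaining the price difference against an uncontended leased line.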
DISCUSS YOUR REQUIREMENTS
Connect with one of our infrastructure advisors today to find out how our services can help.
Alternatively, please call
+44 (0)8000 470 481