Broadband First-Mile Technologies
Revised August 21, 2016
Today, most residents in developed countries have access to multi-megabit Internet access that costs little more than dialup and a phone line did a couple of decades ago. The proliferation of smart phones is driving demand for fast Internet access not just at home but everywhere. The day of the Star Trek communicator is at hand.
Over its relatively short lifetime the Internet has been transformed from an interesting technology used to share expensive mainframes to an essential component of everyday life most of us take for granted.
February 2011 marked an important milestone in Internet history: IANA issued the last IPv4 address blocks to the regional registries. The IPv4 address space is limited to about 4 billion hosts. Various methods have been implemented to extend its lifetime, but the address space is now exhausted. IPv6, the next-generation Internet protocol, has been around for years, but because it is not backward compatible the adoption rate has been painfully slow.
This paper provides an overview of the various technologies used to deliver Internet access and the role played by the Internet Service Provider (ISP).
Internet popularity is driving demand for ever-faster service and exerting downward pressure on price. The connection between end user and ISP is often called the last-mile. That term implies there is a magical entity out there called “The Internet” of which customers are passive consumers. I prefer the term first-mile: it better conveys that the Internet’s value is the result of each person’s connection as both contributor and consumer. Today most citizens in industrialized countries have access to some form of high-speed access. Broadband is increasingly seen as a utility without which citizens are unable to fully participate in society.
Broadband is a much-abused and inexact term. The United States Federal Communications Commission (FCC) has repeatedly redefined the minimum broadband speed. It had been 200 kbps. The “basic broadband” definition raised the requirement to 768 - 1500 kbps toward the customer (downstream). In July 2010 the National Broadband Plan increased the minimum to 4 Mbps toward the customer and 1 Mbps up. In February 2015 the FCC raised the definition of broadband to 25 Mbps down and 3 Mbps up.
Most of us utilize an Internet Service Provider (ISP). The ISP owns, leases, or otherwise has access to a connection to each customer. The figure below provides a high-level overview of how ISPs connect customers to the Internet.
Figure 1 ISP Functional Block Diagram
Connecting to an ISP would not have much value if the only people you could communicate with were other customers. To provide worldwide connectivity ISPs connect to other ISPs at peering points. This allows traffic to be delivered anywhere in the world.
ISPs exert a great deal of control over how customers use the Internet. Much is made of Internet robustness and redundancy. That is true of the Internet in general but for most of us the ISP acts as the on-ramp gatekeeper, limiting how it can be used. In most locations broadband competition is nonexistent or extremely limited. ISP business policy has significant impact on how customers use the Internet and how new Internet services are deployed.
There are several essential functions that must be provided by the ISP, as it is the only entity capable of doing so. There are many other services, often associated with ISPs, that can be provided by anyone. The distinction between essential and non-essential functions is important when discussing Network Neutrality. As broadband access becomes more pervasive, ISPs and policy makers need to balance business considerations with the public interest.
Essential Core Functions
· Customer Connection
· Customer Authentication
· IP Address Allocation
· Packet Routing
· Multicast (IGMP)
· Quality of Service (QoS)
· Service Level Agreement (SLA)
· Acceptable Use Policy (AUP)
· Customer Support
Value-Add Services
· Name Resolution (DNS)
· Web Hosting
· Cloud File Storage
· Virtual Private Network (VPN)
· Voice over IP (VoIP)
· Fixed Mobile Convergence (FMC)
· IP Radio
· IP Television (IPTV)
ISPs deliver a suite of services. When evaluating an ISP it is important to keep in mind which features are core functions only the ISP can provide, and which are value-add services that can be provided by a third party.
First and foremost, the ISP needs to provide a method for the customer to access the ISP network.
Some ISPs own the first-mile access network; Cable and fiber to the premises (FTTP) are examples of this type of ISP. The ISP owns and manages the outside-plant customer connection. DSL ISPs typically rent physical access to legacy copper phone lines from Incumbent Local Exchange Carriers (ILECs) and collocate their equipment in the phone company central office.
Dialup ISPs use the Public Switched Telephone Network (PSTN) to connect customers. The ISP creates regional points of presence (POPs) near customers so the customer is able to call a local ISP telephone number, avoiding per-minute charges. The ISP in turn digitally terminates phone lines to support V.90/92 dialup speeds.
Wireless ISPs do not provide a physical connection at all. Rather, they obtain an FCC license to use the public airwaves to connect to customers. This applies to both fixed and mobile wireless. Once the customer connects to a nearby ISP radio, Internet backhaul is performed much the same as for other ISPs.
Customer interface requirements differ greatly depending on the type of service and whether or not the ISP provides the network access device. For example, Cable, DSL, and FTTP ISPs typically provide the customer with a standards-based modem with either an Ethernet or USB customer interface. In the US, T-1 is a tariffed telecommunication service; the FCC-defined customer interface is a two-pair copper circuit, typically implemented via a smart-jack. Dialup ISPs require the customer to obtain a V.90/92 or ISDN modem. Fixed wireless ISPs typically supply and install the customer antenna and radio. Cellular providers often provide a subsidized smart phone when the customer signs up for service. However, this trend is in decline, with customers often able to purchase an unlocked phone on the open market.
The ISP provides either a routed or a bridged customer connection. Residential accounts are typically bridged; the customer connects to the ISP as if part of the ISP LAN. VLAN techniques prevent users from seeing each other’s traffic. Business-class accounts are typically routed rather than bridged: the ISP’s edge router communicates with the customer’s edge router. Routed connections are more flexible, but also more complex, than bridged ones.
The ISP needs a mechanism to ensure only authorized customers connect to its network. For some types of service the link between customer and ISP is hardwired, so any traffic appearing on the link is assumed to originate from the customer. T1 and FTTP are typical hardwired connections. Shared media such as Cable and wireless need a way to identify the customer. DOCSIS modems include a digital signature to prevent unauthorized access. ADSL ISPs typically use Point-to-Point Protocol over Ethernet (PPPoE) to authenticate customers. Telcos like PPPoE because it facilitates support for third-party ISPs. Dialup ISPs typically utilize Point-to-Point Protocol (PPP) to authenticate customers using the same RADIUS servers as PPPoE.
Each Internet host requires a unique address. ISPs typically provide residential customers with a single public IPv4 address. Large customers may obtain their addresses directly from a regional Internet registry or from wholesale ISPs. IPv4 defines a 32-bit address space yielding about 4 billion possible addresses. That was a large number back when the Internet was limited to a few educational and government institutions but has become a serious limitation today. As a result IPv4 addresses are in very short supply. The next-generation Internet protocol, IPv6, increases the address space to 128 bits, a truly humongous number. With IPv6 even residential customers are issued a large block of addresses.
IP addresses serve multiple functions. They denote a specific Internet host: each host needs an IP address. IP addresses also facilitate routing because they are allocated in blocks. If IP addresses were issued randomly, each router would potentially need to look through billions of addresses to determine how to handle each packet. By aggregating addresses into large blocks, routers need only examine a few high-order address bits to determine how to forward packets.
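The aggregation idea can be sketched as a longest-prefix lookup. This is only a toy model, not a real router implementation; the prefixes and next-hop names below are invented for illustration:

```python
import ipaddress

# Hypothetical routing table: a few aggregated prefixes, each mapped
# to an illustrative next-hop label.
ROUTES = {
    ipaddress.ip_network("0.0.0.0/0"): "default-transit",
    ipaddress.ip_network("203.0.113.0/24"): "peer-A",
    ipaddress.ip_network("203.0.113.128/25"): "customer-B",
}

def next_hop(dst: str) -> str:
    """Return the next hop for the longest (most specific) matching prefix."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in ROUTES if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return ROUTES[best]

print(next_hop("203.0.113.200"))  # the more specific /25 wins: customer-B
print(next_hop("198.51.100.7"))   # falls through to the default route
```

Instead of one entry per host, the router keeps one entry per block; the high-order bits (the prefix) decide the forwarding path.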
Business accounts are typically configured with static addresses. With a static address, customer settings are configured manually, based on information provided off-line by the ISP. This eliminates the possibility of an address change interfering with remote access.

Most residential accounts obtain an IP address dynamically. This is convenient because it eliminates the need for non-technical customers to manually configure the IP address, subnet mask, gateway address, and DNS server address. A dynamically assigned address may change at any time, making it difficult to operate servers.
Due to the severe shortage of IPv4 addresses, some ISPs are forced to issue customers private addresses and translate each private address to one of the public addresses assigned to the ISP. This is the same technique used by residential customers to share a single IP address among multiple computers. While this is fairly transparent to the customer, it prevents them from running any type of server and potentially causes problems if any of the customers sharing an address engage in bad behavior: from the Internet’s perspective they all originate from the same IP address.
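The sharing technique boils down to a translation table. A minimal sketch, with invented addresses and an arbitrary starting port, shows how many private hosts hide behind one public address:

```python
# Minimal sketch of NAT address sharing: many private (address, port)
# pairs are multiplexed onto one public address by rewriting the
# source port. All numbers here are illustrative.
PUBLIC_IP = "198.51.100.1"

class Nat:
    def __init__(self):
        self.table = {}         # (private_ip, private_port) -> public port
        self.next_port = 40000  # hypothetical start of the translation pool

    def translate_out(self, private_ip: str, private_port: int):
        key = (private_ip, private_port)
        if key not in self.table:
            self.table[key] = self.next_port
            self.next_port += 1
        return (PUBLIC_IP, self.table[key])

nat = Nat()
# Two customers, same source port, one public address:
print(nat.translate_out("10.0.0.2", 5000))  # ('198.51.100.1', 40000)
print(nat.translate_out("10.0.0.3", 5000))  # ('198.51.100.1', 40001)
```

Both flows leave the ISP with the same public source address, which is exactly why inbound servers and per-customer accountability become difficult.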
February 2011 witnessed a major milestone on the journey to mass deployment of IPv6: IANA made the final allocation of IPv4 addresses. This event had long been anticipated, but having finally occurred it ought to spur more rapid deployment of IPv6, the successor to IPv4. IPv6 represents a significant improvement over IPv4, but adoption has been painfully slow because IPv6 is not backward compatible with IPv4. IPv4 has a 32-bit address space supporting approximately 4 billion hosts (4.3 x 10^9); IPv6 uses 128 bits for a mind-boggling 340 undecillion hosts (3.4 x 10^38). The massive address space allows large blocks of addresses to be allocated, easing routing and management.
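The address-space arithmetic is easy to check directly:

```python
# 32-bit vs. 128-bit address space, computed directly.
ipv4_hosts = 2 ** 32
ipv6_hosts = 2 ** 128
print(ipv4_hosts)           # 4294967296, about 4.3 x 10^9
print(f"{ipv6_hosts:.1e}")  # 3.4e+38, roughly 340 undecillion
```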
Since IPv6 is not backward compatible ISPs offer a number of ways to support the transition.
1) Dual-Stack is probably the easiest to understand. The ISP provides the customer with both IPv4 and IPv6 addresses. Customer equipment uses the appropriate version to communicate with the remote host, preferring IPv6. The downside of this implementation is the need to provide the customer with a publicly routable IPv4 address, and the customer network gear has to support both IPv4 and IPv6. The lack of IPv4 addresses is why it is imperative the Internet adopt IPv6.
2) Dual-Stack Lite: The ISP provides only IPv6 addresses to customers; all traffic between the customer and the ISP network is IPv6. When a customer accesses an IPv4 Internet host, the customer’s router encapsulates the IPv4 packet and transports it over the IPv6 connection to the ISP. The ISP uses carrier-grade NAT (CGN), much like the way a typical home network shares a single IP address today. The customer’s router decapsulates IPv4 packets and distributes IPv4 and IPv6 packets within the LAN. Just to keep life interesting, the term CGN has been deprecated; it is now called large-scale NAT (LSN) to more accurately reflect what the technique does.
3) Tunneling (6in4) is a way for IPv6 packets to be transported over an IPv4-only network to another IPv6 network. This is probably not of interest to most readers of this paper but is very useful for companies with many locations that have adopted IPv6 internally.
A significant force driving IPv6 adoption is the cellular phone network, especially outside the US where the IPv4 shortage is more acute. The proliferation of smart phones means the network needs to hand out an IP address per person rather than per residence, greatly increasing the number of addresses needed.
The term Internet is a contraction of inter-network; the Internet is literally a network of networks. Routers are used to forward packets between networks. Devices know whether or not a host they are trying to reach is local. To access a remote host, packets are forwarded to a router, called a gateway, attached to the local area network (LAN). The router uses its knowledge of connection topology to make intelligent forwarding decisions. This process is repeated multiple times until the packet finally reaches its ultimate destination. Routers learn connection topology by exchanging routing information. For most residential customers this forwarding decision is trivial, as there is only one connection to the Internet.
Signing up with an ISP would not be very useful if the customer were limited to communicating only with other customers of the same ISP. The early Internet consisted of a few nodes interconnected by point-to-point links rented from the old Bell System. As the Internet grew it became apparent there was a need for a high-speed data network to interconnect high-usage nodes. Transit providers span continents and oceans, providing the backbone. Transit providers exchange traffic with each other and accept traffic from ISPs. Large companies, ISPs, and governments often connect directly to one another, called peering, eliminating the need to use a transit provider for some traffic. Smaller ISPs purchase bandwidth from third-party wholesale suppliers. The end result is that regardless of how one connects it is almost always possible to communicate with anyone else on the Internet.
This drawing is very simplified; typically all but the smallest ISPs will have multiple connections to various transit providers and often peering connections to other ISPs. Routing protocols choose the best route to deliver each packet. One of the network neutrality concerns is that ISPs will choose less congested routes for partners, resulting in slower performance if a customer is accessing a non-preferred site.
Figure 2 Peering
The Internet is a powerful communication medium. A user is able to connect to another host anywhere in the world virtually instantly. As powerful as this type of communication is, it is not well suited for broadcast: delivery of one program to many subscribers simultaneously. The traditional broadcast business model grew out of the technical limitations of radio. A station owner built a transmitter and anyone within range was able to receive the broadcast.
The one-to-one connection model used by the Internet makes it difficult to cost-effectively broadcast programs, since each listener requires a unique network session. Internet Group Management Protocol (IGMP) creates the infrastructure to deliver a single stream to multiple users. At each branch a decision is made whether or not to forward the stream: if an active listener is downstream, packets are forwarded; if not, they are dropped. This conserves channel capacity by suppressing streams no one is listening to. IGMP dramatically reduces server load since only a single copy is transmitted. Internet broadcasting is still in its infancy and IGMP is not commonly implemented by ISPs. For multicast to function, each router between sender and receiver needs to support IGMP.
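The per-branch forwarding decision can be modeled in a few lines. The topology and host names below are invented; this is a sketch of the decision logic, not of the IGMP protocol messages themselves:

```python
# Toy model of multicast forwarding: a node forwards the stream down a
# branch only if at least one active listener sits somewhere below it.
TREE = {
    "isp-core": ["branch-1", "branch-2"],
    "branch-1": ["home-A", "home-B"],
    "branch-2": ["home-C"],
}
LISTENERS = {"home-A"}  # hosts that joined the group (via IGMP reports)

def forwards(node: str) -> bool:
    """True if this node should forward the stream downstream."""
    if node in LISTENERS:
        return True
    return any(forwards(child) for child in TREE.get(node, []))

print(forwards("branch-1"))  # True: home-A joined the group
print(forwards("branch-2"))  # False: stream suppressed on this branch
```

Only one copy of the stream traverses each link that has a listener below it; links with no listeners carry nothing.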
The Internet is an egalitarian best-effort network. This works amazingly well for transferring large chunks of data from point A to point B. The network continues to operate in the presence of all sorts of impairments and failures. However, best effort does not work well with latency-critical applications such as telephony and streaming media when the network is congested. For example, a Voice over IP (VoIP) phone call requires round-trip latency under 150 ms. Excessive delay makes carrying on a conversation difficult and, when extreme, virtually impossible. On the other hand, if a print job is delayed a little no one is likely to notice as long as it completes successfully.
When a switch or router encounters congestion it buffers incoming packets until it is able to forward them. Normally this occurs on a first-in first-out (FIFO) basis. A Quality of Service (QoS) metric allows latency-sensitive packets to receive priority queuing. This simple strategy works well if latency-critical traffic is a small percentage of the total. QoS marks packets with a Diffserv priority level. When congestion occurs, higher-priority packets are delivered first. Lower-priority packets are delayed, or discarded during periods of extreme congestion. QoS allows more graceful degradation by moving high-priority packets to the head of the queue.
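The queuing behavior can be sketched with a simple priority queue. This is a simulation of the idea, not router code; the packet names are invented, though 46 is the conventional Diffserv EF (voice) codepoint:

```python
import heapq

# Sketch of priority queuing: packets carry a Diffserv-style priority;
# under congestion the queue releases higher-priority packets first,
# preserving FIFO order within a priority level.
class PriorityQueue:
    def __init__(self):
        self.heap = []
        self.seq = 0  # tie-breaker keeps FIFO order within a class

    def enqueue(self, priority: int, packet: str):
        # Negate priority so larger (more urgent) values come out first.
        heapq.heappush(self.heap, (-priority, self.seq, packet))
        self.seq += 1

    def dequeue(self) -> str:
        return heapq.heappop(self.heap)[2]

q = PriorityQueue()
q.enqueue(0, "print-job")
q.enqueue(46, "voip-frame")  # EF codepoint: expedited forwarding
q.enqueue(0, "web-page")
print(q.dequeue())  # voip-frame jumps the queue
print(q.dequeue())  # print-job (FIFO among best-effort packets)
```

Note the queue does not create capacity; it only decides which packet waits.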
As discussed in a later section, traffic shaping and preferential packet treatment are controversial. Network Neutrality proponents are concerned ISPs will strike business deals with partners to preferentially deliver their data at the expense of competitors. It is important to remember Quality of Service mechanisms do not provide additional channel capacity. They simply redefine winners and losers. When channel capacity does not meet “offered load” (an old telecom term), some policy must be in place to deal with congestion. The PSTN managed congestion by withholding dial tone or returning an “all trunks busy” message when a call could not be completed. The Internet handles congestion by delaying packets or, in extreme cases, dropping them. QoS controls which packets get delayed. Many argue deploying additional capacity is more cost effective than implementing a complex differential service mechanism.
To be maximally effective QoS requires end-to-end deployment. The technical and business problems facing QoS are much the same as those facing IGMP: there is little value until “everyone” deploys it and little incentive to be an early adopter. The ISP and all intermediate nodes need to honor packet privilege levels and treat packets accordingly. Controls at each level need to monitor statistics to prevent a “tragedy of the commons”: if too many packets ask for priority handling they all suffer.
Most residential broadband service is asymmetric; download is much faster than upload. There is benefit in shaping upload traffic so higher-priority traffic is treated preferentially at the edge of the customer’s network. The customer’s edge router examines outbound packets and prioritizes them. Many residential routers already do this to a limited extent, giving TCP ACKs preferential treatment. Similar treatment may be applied to VoIP or critical gaming packets.
One of the main differences between residential and business accounts is the Service Level Agreement (SLA). The SLA defines things like minimum speed, maximum latency, service reliability, and mean time to repair. The SLA imposes performance guarantees the ISP must meet and penalties if it does not. This is one of the reasons business-class service is so expensive. Residential accounts are typically best effort. If the connection fails or experiences congestion, the ISP is under no obligation to correct the problem on an expedited basis.
The acceptable use policy (AUP) defines customer responsibilities, how the service may be used, and penalties for misuse. For example, residential customers are typically prohibited from reselling access or running servers, and ISPs often block certain types of traffic. In an attempt to reduce cost, some residential ISPs impose usage caps to limit monthly download and upload. Most ISPs reserve the right to revise the AUP at any time, making for a pretty one-sided contract.
CALEA, the Communications Assistance for Law Enforcement Act, passed in 1994 and has been greatly expanded over the years. It requires the ISP to install special equipment to facilitate wiretapping of customers’ digital traffic by law enforcement. Originally it was limited to voice traffic but has been expanded to include all ISPs. There is a lot of pressure on ISPs to retain customer web browsing history and to make it available to law enforcement and antiterrorism agencies. This has been especially prevalent in Europe but is also happening in the US.
Regardless of how good service is, on occasion it will be necessary to contact technical support to resolve problems. Tech support responsiveness dramatically affects overall customer satisfaction.
Most residential broadband providers offer only limited help in troubleshooting problems. Finger pointing can be frustrating when a customer is trying to resolve a complex interaction and the ISP does not consider it its responsibility. Specialized web sites such as DSLReports can be an effective alternative. DSLReports is a good example of an Internet community; members post questions and assist each other in dealing with network issues.
ISPs would not stay in business long if they could not charge for service. During the Dotcom era some dialup ISPs offered advertising-supported free access; those companies are long gone.
Most ISPs offer flat-rate billing based on speed tier. Monthly cost is based on connection speed, not how much the service is used. Some ISPs set monthly bandwidth consumption quotas; exceeding the monthly cap results in an extra charge or a reduction in speed. Caps are controversial because measurement tends to be inaccurate and they have little to do with the cost of providing service. Caps are pretty common for Cable and wireless providers, often imposing a significant surcharge. Usage-based billing is very common for cellular service. Andrew Odlyzko has written extensively about customer pricing preferences – what people are willing to pay for and how they prefer paying for it.
There is no notion in the Internet world comparable to telephone long distance. It does not cost any more to access a web site across the street than one around the world. Back in the early days of the telephone it was very difficult and expensive to transport calls over long distances. The advent of fiber optic technology has reduced transmission cost so it represents only a small fraction of the cost to deliver Internet access. The distance-independent paradigm of the Internet is beginning to affect how traditional telephone calls are billed. By way of example, the bundled wireline phone service provided by my ADSL CLEC does not impose per-minute charges for domestic or Canadian phone calls. The monthly bill covers unlimited Internet and domestic phone usage.
This section examines services often provided by ISPs but that can be provided by third parties or, in some cases, even the customer. This distinction is important in the Network Neutrality debate. If an ISP decides to offer a non-standard or value-add service and the customer or a third party is able to supply a similar service, the impact is dramatically different than if the ISP implements a proprietary core service.
I struggled with whether to put DNS in the essential or non-essential section.
The Domain Name System (DNS) translates a Uniform Resource Locator (URL) to an IP address. Without DNS web sites would have to be accessed by IP address. DNS is unique in that it is a fully distributed database. The DNS name space is evaluated right to left. The naming convention begins with an implied “.” at the extreme right, the root domain. Next in the hierarchy are the top level domains (TLDs: com, gov, edu, uk, ru), then the registered domain name (tschmidt is my registered domain within the .com top level domain), then one or more subdomains. As each level is traversed it provides information about the next lower level until ultimately the IP address of the particular host server is determined.
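The right-to-left traversal can be sketched as a list of lookup zones. The domain is the author's own example from the text; the resolver logic itself is only outlined, not a real DNS query:

```python
# Walk a domain name right to left, producing the zone consulted at
# each step of resolution, starting from the implied root ".".
def resolution_steps(name: str):
    labels = name.rstrip(".").split(".")
    steps, zone = [], ""
    for label in reversed(labels):
        zone = label + "." + zone  # trailing "." stands in for the root
        steps.append(zone)
    return steps

print(resolution_steps("www.tschmidt.com"))
# ['com.', 'tschmidt.com.', 'www.tschmidt.com.']
```

Each step corresponds to asking one level of the hierarchy for the servers responsible for the next level down.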
If DNS is unable to resolve a domain name it returns an error message. Some ISPs have attempted to monetize incorrect URL entry by returning an advertising-supported web page if the URL cannot be resolved. DNS redirection is controversial: some customers may find redirection useful, others not.
There are lots of public DNS servers available if you do not like the one provided by your ISP. They can also be handy for troubleshooting if your ISP is experiencing DNS problems. For many years I used the popular TreeWalk program to run my own DNS resolver. The web site has expired so I can no longer recommend using it. Gibson Research has a handy DNS benchmarking tool to test performance of multiple resolvers.
So why did I say I struggled with this topic, since it is obvious you do not have to use your ISP’s DNS server? The issue is content distribution networks (CDNs). CDNs cache content physically close to the end user, sometimes even at the ISP data center. Using a DNS resolver other than the one provided by your ISP can actually degrade performance. This is because the DNS server will be unaware of any private arrangements between the ISP and CDN, and the physical location of the public DNS server is likely significantly different than that of the ISP DNS. The result is the public DNS server may return the IP address of a non-optimal CDN caching edge server, degrading performance.
Just about all ISPs provide email. It is wise to consider an ISP e-mail account a throwaway. If you change ISPs or the ISP is sold, your email address changes, making it difficult for folks to stay in touch. For a more permanent address use one of the free e-mail services such as Yahoo or Gmail, or better yet register your own domain.
One useful way to use ISP email is for home automation devices. We have several that send notification emails, either at a fixed time of day or due to certain events. Sending these emails from your ISP account to another email account is a great way to verify both are operating properly.
Usenet Newsgroups are a valuable source of up to date information. Usenet is text based and predates the web. Most ISPs used to include Usenet access. Due to declining interest in Usenet and legal attacks related to pornography many ISPs are eliminating Usenet support. Usenet access is available from a number of specialized companies. Usenet Compare has a nice comparison list of newsgroup providers.
Many ISPs provide web site hosting for residential customers. This allows customers to have an Internet presence without having to register a domain name or run their own server. The ISP runs a virtual server, enabling many web sites to run on a single computer. ISP web hosting is a boon to residential customers, providing a painless way to create a web presence. As with email, use of the ISP web server binds the customer’s web site to the ISP. There are many hosting alternatives that decouple personal web sites from the specific ISP.
The cloud is the new buzzword for outsourcing services over the Internet. Many ISPs offer some form of network storage, either as part of the plan or as an extra-cost add-on. Storing your information on the Internet means you can access it from anywhere without the need to run your own server, and if your house burns down or your computer crashes your data is safe. On the other hand, the fate of your data is in the hands of others.
Virtual Private Network (VPN) technology uses the public Internet to create private communication paths. Depending on how it is implemented it may be a feature only the ISP is able to deliver or something the customer or a third party is able to engineer. Once the province of large companies, VPNs are attractive for any customer that needs to securely access their network remotely.
Large companies make extensive use of MPLS to implement a geographically dispersed corporate LAN. To users, regardless of location, resources appear to be on the LAN. The service provider configures edge routers such that data presented to them is delivered to the correct physical location. The ISP isolates each company’s traffic so it is invisible to other companies.
Figure 4 MPLS VLAN
More MPLS/VPN details in this Network World article.
It is also possible for customers to create their own VPN using IPsec. In this case the customer, rather than the service provider, creates a secure end-to-end path through the public Internet. IPsec is used extensively to support satellite offices and telecommuters.
SSL/TLS is another mechanism used to provide end-to-end privacy. SSL was originally developed by Netscape to protect web-based financial transactions. Because it is built into all browsers, many companies use it, rather than IPsec, to provide remote employee access.
The public switched telephone network (PSTN) represents a hundred years of engineering. Recently packet-based telephony has become a serious contender. Rather than traditional circuit switching, Voice over IP (VoIP) uses packet-based communication to deliver two-way real-time voice. Voice communication is very demanding. The voice data rate is low by Internet standards, only 8-64 kbps in each direction, but latency is critical. If packets are delayed more than a few hundred milliseconds voice quality is seriously degraded.
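Some back-of-the-envelope packetization numbers follow from the codec rates cited above. The 20 ms packet interval is an assumption here (a common VoIP choice), not something the text specifies:

```python
# Convert a codec bit rate into packets per second and payload bytes
# per packet, given a fixed packet interval.
def voip_packets(codec_bps: int, packet_ms: int = 20):
    packets_per_sec = 1000 // packet_ms
    payload_bytes = codec_bps // 8 // packets_per_sec
    return packets_per_sec, payload_bytes

print(voip_packets(64_000))  # 64 kbps: 50 packets/s, 160 payload bytes each
print(voip_packets(8_000))   # 8 kbps low-rate codec: 50 packets/s, 20 bytes
```

The small, frequent packets are why VoIP is so sensitive to queuing delay: every one of those 50 packets per second must arrive on time.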
Figure 5 Voice over IP
As with any new technology many players have entered the market. Most will fail but a few will succeed. If your ISP offers VoIP, check the service thoroughly. The asymmetric nature of most residential service, upload being much slower than download, makes it easy to saturate the connection. Quality of Service (QoS) marking may be required to tag VoIP packets as high priority so they get preferential treatment.
In the US the FCC mandates telephone number portability. In most cases you will be able to transfer your existing wire-line phone number to new VoIP phone service or to another wire-line carrier. I recently switched our wire-line phone and DSL Internet from the incumbent phone company to a competitive exchange carrier. Number portability allowed us to maintain the same land line phone number we have had for many years. We also took advantage of number portability when we switched cellular providers.
Voice over IP presents many challenges for E911 emergency service. Unlike wire-line POTS, where telephone location never changes, a VoIP call can originate anywhere. Cellular networks have struggled for years to implement E911 service, using triangulation or GPS to locate subscribers.
There is tremendous interest in multimode cellular phones able to utilize both the traditional cellular network and, opportunistically, Wi-Fi networks. Fixed Mobile Convergence (FMC) represents a win-win for both customer and wireless provider. For providers it utilizes the vast potential of the Internet and private LANs to remove traffic from expensive cellular radio networks. For the customer it represents potentially lower cost and improved performance. For businesses it represents a way to eliminate traditional PBX wired telephone infrastructure without paying extravagant per-minute charges. Depending on national legal restrictions it may offer an arbitrage advantage, allowing multinational corporations to treat voice like email, bypassing local phone companies and eliminating per-minute charges.
Figure 6 Fixed Mobile Convergence
An alternative to Wi-Fi is the femtocell, offered by several cellular phone companies. Femtocells are low-power cellular base stations that utilize the customer’s broadband connection to deliver coverage to a single home. As with Wi-Fi, cellular providers like it because it moves traffic off cell stations.
At this point it is unclear what role, if any, the ISP will play. FMC looks like any other real-time traffic to the ISP. Our cell phone provider, Republic Wireless, has been aggressive in developing this technology. It works extremely well for us in terrain-challenged NH, where we have poor cell phone coverage at home.
A difficult problem is seamless roaming between networks. To a limited extent this is already being done by Wi-Fi as a user moves between access points. However, for this to work all APs must be under the same administrative control. In an ideal world a device associates with a network and, as it moves, automatically reconnects to the best network at the new location without any interruption in service. As an example, imagine a user beginning a Wi-Fi session at home, getting in their car, moving out of range, and being handed off to the cellular network. They stop for breakfast and are back within range of a different Wi-Fi network; lastly they arrive at work and join the corporate LAN. The IEEE 802.21 media independent handover services working group tackled this difficult problem.
ISPs do not appear much interested in becoming content aggregators for radio the way they are for TV. But other than a much lower bandwidth requirement, Internet radio is not much different from Internet TV. Radio-Locator is a convenient way to find Internet radio stations.
Over-the-air (OTA), cable, and DBS TV all use basically the same transmission scheme: RF spectrum is divided into channels. US TV channels are 6 MHz wide, in Europe 8 MHz. Channels were initially specified to carry a single analog standard-definition TV program. Migration to digital transmission allows each channel to carry multiple high-definition (HDTV) and/or standard-definition (SDTV) programs.
IPTV represents a fundamentally different way to deliver TV, leveraging packet-based technology, and opens the door to demand-based programming. The traditional broadcast model is one-to-many, an artifact of radio transmission: once a transmitter is set up, anyone within range is able to receive the program. Video on demand (VoD) is more like going to the library than changing channels. One simply selects the program of interest and it is delivered virtually instantly, anywhere, anytime, to any device the end user chooses.
Using MPEG-2 compression, SDTV requires about 2 Mbps and HDTV about 15 Mbps. MPEG-4 yields significantly lower data rates for equal image and sound quality. These rates are the result of spatial (within the picture) and temporal (over time) data compression; the raw data rate is much too high to be delivered economically.
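To see why compression is essential, a quick back-of-the-envelope calculation helps. The sketch below assumes 8-bit 4:2:2 sampling (an average of 16 bits per pixel) at 30 frames per second; these are common assumptions, not figures from the text.

```python
# Approximate raw (uncompressed) video rates, assuming 8-bit 4:2:2
# sampling, i.e. 16 bits per pixel, at 30 frames per second.
def raw_rate_bps(width, height, fps, bits_per_pixel=16):
    return width * height * fps * bits_per_pixel

sd = raw_rate_bps(720, 480, 30)    # uncompressed SDTV
hd = raw_rate_bps(1920, 1080, 30)  # uncompressed HDTV

print(f"SD raw: {sd/1e6:.0f} Mbps -> ~{sd/2e6:.0f}:1 compression at 2 Mbps")
print(f"HD raw: {hd/1e6:.0f} Mbps -> ~{hd/15e6:.0f}:1 compression at 15 Mbps")
```

The raw SD rate works out to roughly 166 Mbps and HD to nearly 1 Gbps, so MPEG must squeeze the signal by a factor of 60 or more to hit the delivery rates above.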
Video on demand presents many challenges compared to traditional broadcast. Each user is able to start and stop the program at any time, requiring a discrete program feed to each user rather than a single feed to all users as with broadcast.
Historically, residential ISPs assumed a customer traffic model of primarily bursty download traffic such as loading web pages or accessing email. Streaming TV, and to a lesser extent streaming radio, ties up significant bandwidth for extended periods of time. This is much more demanding than browsing.
IPTV dramatizes the disruptive nature of the Internet. Since the end of WWII cable companies have wired areas to deliver broadcast TV over coax and, more recently, fiber; the cable network is intimately bound to TV delivery. As residential broadband speed increases, the door opens for new providers to bundle content and deliver it without the need to build or own the means of local delivery. ISPs are worried about being relegated to commodity bandwidth providers.
Basic to the design of the Internet is the notion of direct end-to-end communication. When Computer A wants to exchange data with Computer B routers between the two move packets the most efficient way they can on a packet by packet basis. The popularity of streaming video services like YouTube and Netflix stresses the network as millions of users access the same content from diverse locations.
Video is very bandwidth intensive. Video on demand requires a one-to-one connection between user and server, as opposed to the one-to-many model of traditional broadcast. Being demand based, each user may be viewing a different program or a different point within the same program. To address the growing interest in video on demand (VoD), specialized service providers called Content Delivery Networks (CDNs) have become popular. A CDN replicates programs on many caching servers located near the ultimate end users. Often CDNs have special peering arrangements with large ISPs or are located within the ISP's data center itself. CDNs reduce the amount of traffic flowing over Internet transit networks because they are able to source the file near where it is being viewed. When a customer requests a particular program, the ISP's DNS servers return the address of the local caching server that can most efficiently stream the program to the customer.
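The request-routing idea can be shown as a toy sketch. The server names and RTT figures below are invented for illustration; real CDNs use far richer metrics (load, topology, cost) than a single latency number.

```python
# Toy CDN request routing: answer a DNS query with the cache "closest"
# to the requesting ISP. Server names and RTTs are hypothetical.
CACHES = {
    "cache-nyc.example.net": 12,   # assumed RTT in ms from this ISP
    "cache-chi.example.net": 35,
    "cache-lax.example.net": 78,
}

def resolve_nearest(caches):
    """Return the cache server with the lowest measured RTT."""
    return min(caches, key=caches.get)

print(resolve_nearest(CACHES))  # cache-nyc.example.net
```

The point is that the same program name resolves to different servers depending on where the viewer is, keeping the stream off the transit backbone.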
Historically, when a customer contracted with an ISP they were given a block of IP addresses large enough to meet their needs. The IPv4 address shortage forced ISPs to rethink how they allocate scarce addresses, and most residential broadband ISPs now restrict the customer to a single IPv4 address. This creates a quandary: how to cost-effectively connect multiple hosts to the Internet? The most common workaround is Network Address Translation (NAT) coupled with private IP addresses. RFC 1918 reserves three blocks of IP addresses guaranteed never to be routed on the public Internet. Because these addresses are not used on the public Internet, they can be reused in any number of private networks.
Combining NAT (more properly Network Address Port Translation, since both address and port number are modified) with private addresses allows a virtually unlimited number of computers to share an Internet connection even though the ISP provides only a single address. NAT translates between the private addresses on the LAN and the single public address issued by the ISP on the WAN.
NAT only affects non-local communication. When a request cannot be serviced locally it is passed to the NAT router, called the gateway. The router modifies each packet, replacing the private address with the public address issued by the ISP, modifying the port number if needed to support multiple sessions, and calculating a new checksum. The router sends the modified packet to the remote host as if it originated from the router. When the router receives the reply, the modifications are reversed and the packet is forwarded to the originating host. The router tracks individual sessions so multiple computers are able to share a single address. From the Internet's perspective local hosts are invisible; the router looks like a single computer with the public IP address issued by the ISP.
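The translation logic can be sketched in a few lines. The addresses and the port range below are illustrative, not from any particular router, and real NAPT also tracks protocol and ages out idle mappings.

```python
# Minimal sketch of Network Address Port Translation (NAPT).
# Each private (address, port) pair is mapped to a unique port on the
# router's single public address; replies are mapped back.
PUBLIC_IP = "203.0.113.7"   # example public address issued by the ISP

class NaptRouter:
    def __init__(self):
        self.table = {}        # (private_ip, private_port) -> public_port
        self.reverse = {}      # public_port -> (private_ip, private_port)
        self.next_port = 40000

    def outbound(self, src_ip, src_port):
        """Rewrite an outgoing packet's source address and port."""
        key = (src_ip, src_port)
        if key not in self.table:
            self.table[key] = self.next_port
            self.reverse[self.next_port] = key
            self.next_port += 1
        return PUBLIC_IP, self.table[key]

    def inbound(self, dst_port):
        """Map a reply, addressed to the public IP, back to the host."""
        return self.reverse[dst_port]

r = NaptRouter()
print(r.outbound("192.168.1.10", 51000))  # ('203.0.113.7', 40000)
print(r.outbound("192.168.1.11", 51000))  # ('203.0.113.7', 40001)
print(r.inbound(40000))                   # ('192.168.1.10', 51000)
```

Note how two hosts using the same source port are disambiguated by the translated port, which is why port translation (not just address translation) is required.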
IPv6, with its vast address range, does not require NAT; each device will have its own public IP address. This changes the nature of residential routers. NAT, though not technically a firewall, blocks all incoming connection requests from remote hosts: unless specifically programmed with port forwarding rules, it does not know which device on the LAN to forward the request to. This default behavior is lost with IPv6. Residential routers that support IPv6 should block incoming connection requests unless specifically programmed otherwise.
The Internet is designed as a transparent end-to-end bit delivery network: any host is able to communicate with any other host. TCP/IP and UDP/IP use ports so a host can manage multiple simultaneous sessions. Ports are 16-bit unsigned values, yielding up to 65,535 usable ports for each connection type. When a service is defined, a port number is selected for initial contact, called the well-known port. For example, the well-known port for HTTP web access is 80. When a remote user attempts to connect, it sends the request to TCP port 80. Once the initial connection is established, both computers agree to use a different combination of ports for ongoing communication. An analogy is to think of the well-known port as a doorbell: if the ISP blocks access to the well-known port, remote users are unable to connect.
It is common practice for residential ISPs to block incoming port 80 to prevent customers from running web servers, port 25 to send email to prevent spam, and ports 137, 138, 139, and 445 to prevent remote access to Windows LAN based SMB file sharing. In an effort to reduce file trading some ISPs throttle or block ports used for peer-to-peer (P2P) file trading applications. Impact of blocked ports varies.
To get around a blocked port it is easy to reconfigure the server to use a non-standard port. If access is limited to a small group of friends it is easy enough to simply inform everyone which port to use. If the goal is wider public access, nonstandard ports are a problem: without knowing the port number, remote users are unable to connect. URL forwarding is a technique to work around this restriction.
Residential ISPs made assumptions about typical customer usage when they set monthly charges and designed infrastructure. The business model assumed bursty data flow: predominantly web browsing, email, and the occasional file download. The proliferation of peer-to-peer (P2P) file trading and streaming video services, such as YouTube and IPTV, has upset these assumptions. ISPs are struggling to carry more traffic than originally planned.
Some ISPs are responding with traffic quotas: when a customer exceeds the quota, either speed is reduced or an additional charge is incurred. There have been numerous stories of unwitting customers being billed thousands of dollars in overage charges on their cell phone data accounts. On the other hand, some ISPs detect undesirable traffic and throttle its speed rather than blocking it entirely.
ISPs often justify usage-based pricing as a way to control congestion; however, congestion is a temporal phenomenon having little to do with aggregate usage. Congestion occurs only when instantaneous demand exceeds capacity. As has been well documented, usage caps are really used to generate additional revenue or to protect legacy business models.
The proliferation of digital media devices and networking is making the traditional media world nervous because digital technology allows rapid lossless copying. From a technology standpoint the digital rights management (DRM) mechanisms used to prevent this have been a spectacular failure and in some cases have actually caused damage to end-user devices.
A recent concern is Apple's removal of the analog headphone jack from its smart phones, widely seen as a way to extend DRM end to end. While the industry loves DRM, it often prevents users from accessing content they have legally obtained.
Some ISPs use a technique called Deep Packet Inspection (DPI) to determine how customers are using the Internet and to block or throttle use they deem harmful. DPI can also be used to obtain additional information about a customer's Internet usage, data of great interest to targeted-marketing vendors. The use of DPI falls into a grey area of what is and is not acceptable ISP behavior. In addition, many governments want to know what their citizens are doing and press ISPs to track customer usage.
In the quest for ever-faster speed it is important not to lose sight of the interplay between speed and latency. A truck carrying DVDs exhibits very high speed (bits per second) once it arrives, but also high latency because it takes hours or days for the data to arrive. Round-trip latency is the time it takes a packet to go from source to destination and back again. Factors affecting latency include connection speed, modem overhead, distance, propagation speed, and network congestion.
Modems operate on "chunks" of data, increasing latency because an entire block must be processed before being passed to the next stage; data cannot be used until the last bit in the block is received. DSL modems often use a technique called interleaving to reduce sensitivity to transient noise. This maximizes robustness by spreading the effect of errors, but adds latency because it operates over a larger data block. Low-speed connections such as dialup often use a smaller packet size to minimize this effect.
Light travels 186,000 miles per second in a vacuum; optical fiber is somewhat slower, about 70% of the speed of light in a vacuum. A packet traveling the 3,000 miles from New York to LA takes about 25 ms in each direction. To this one must add delay at each router between source and destination. Normally this delay is negligible, but if the network becomes congested a router must temporarily store incoming packets until the outgoing path is free. In extreme cases the router will discard packets. When packets are lost, the upper-level protocol either requests retransmission (TCP/IP) or, in the case of streaming data (UDP/IP), conceals the missing data.
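The propagation figures above are easy to reproduce:

```python
# Propagation delay over fiber, using the numbers above: light in fiber
# travels at roughly 70% of its vacuum speed of 186,000 miles/s.
MILES_PER_SEC = 186_000 * 0.70   # ~130,200 miles/s in fiber

def one_way_ms(miles):
    return miles / MILES_PER_SEC * 1000

print(f"NY-LA one way: {one_way_ms(3000):.0f} ms")
print(f"NY-LA round trip: {2 * one_way_ms(3000):.0f} ms")
```

This gives roughly 23 ms one way, close to the 25 ms round figure above; the difference between that and measured ping times is router, modem, and queuing delay.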
Impact of latency is heavily dependent on data type. Interactive use such as gaming and Voice over IP (VoIP) telephony place stringent demands on latency but do not require much bandwidth. File transfer on the other hand is relatively insensitive to latency but places great importance on speed.
Most residential broadband service is asymmetric: download is much faster than upload. This is done for technical and business reasons. Asymmetric speed allows the ISP to position residential service differently than business service and charge a higher fee for business-class service.
The end user's LAN is rarely the determinant of Internet speed, as wired and wireless LAN performance normally greatly exceeds Internet access speed. Speed is typically limited by the first-mile WAN connection. It can be a challenge teasing out the various components of end-to-end performance to see if the ISP is delivering as advertised.
IP transmission splits data into chunks of up to 1500 bytes called packets (1 byte = 8 bits). Some of the 1500 bytes are used for network control and so are not available for user data: TCP/IPv4 uses 40 of the 1500 bytes for control (TCP/IPv6 60 bytes). NOTE: this analysis assumes maximum-size packets; since the overhead is fixed, smaller packets incur a higher overhead percentage. With 40 bytes reserved for control, out of every 1500 bytes sent only 1460 are available for data. This represents about 2.7% overhead.
Some ISPs, typically phone companies, use a protocol called Point-to-Point Protocol over Ethernet (PPPoE) to transport DSL data. This is an adaptation of the PPP used by dialup ISPs. Telcos like PPPoE because it facilitates support of third-party ISPs as mandated by the FCC. PPPoE adds 8 bytes to each packet, increasing overhead to 48 bytes and reducing payload to 1452 bytes. Where PPPoE is used, overhead increases to 3.2%.
DSL connections typically use Asynchronous Transfer Mode (ATM) adaptation layer 5 (AAL5) to carry DSL traffic. ATM was designed for low-latency voice telephony; when used for data it adds significant overhead. ATM transports data in 53-byte cells of which only 48 bytes are payload, the other 5 control. Each 1500-byte packet is split into multiple ATM cells: a 1500-byte packet requires 32 cells (32 x 48 = 1,536 bytes), and the extra 36 bytes are padding, further reducing ATM efficiency. The 32 ATM cells require the modem to transmit 1,696 bytes, of which only 1452 carry payload. Where ATM/PPPoE is used, overhead increases to 14.4%.
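The three overhead figures can be checked with a few lines of arithmetic:

```python
# Per-packet overhead for a full-size 1500-byte packet under the three
# framings discussed: plain TCP/IPv4, PPPoE, and ATM/PPPoE.
MTU = 1500

def overhead_pct(wire_bytes, payload_bytes):
    return 100 * (wire_bytes - payload_bytes) / wire_bytes

plain = overhead_pct(MTU, MTU - 40)       # 40 header bytes of 1500
pppoe = overhead_pct(MTU, MTU - 48)       # PPPoE adds 8 more bytes
atm   = overhead_pct(32 * 53, MTU - 48)   # 32 cells = 1696 wire bytes

print(f"TCP/IPv4:  {plain:.1f}%")   # ~2.7%
print(f"PPPoE:     {pppoe:.1f}%")   # 3.2%
print(f"ATM/PPPoE: {atm:.1f}%")     # ~14.4%
```

The ATM figure is dominated by the 5-byte cell tax plus the padding in the final cell, which is why ATM framing costs so much more than PPPoE alone.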
It is easy to determine the best-case file transfer rate if the modem data rate is known. The broadband marketing rate may not be the same as the modem transfer rate; some telcos set the transfer rate higher than the marketed speed to compensate for overhead, so speed test results come close to the marketed speed. Most broadband modems have a status page allowing the user to observe the true transfer rate. This is the rate at which the modem connects to the ISP, not the speed at which the computer connects to the modem or router, which is typically 10 Mbps, 100 Mbps, or 1 Gbps.
The sync rate of my FirstLight ADSL service is 7002 kbps down and 996 kbps up. FirstLight uses DHCP, so there is no PPPoE overhead, but it does use ATM. Best-case speed for my connection is 6,036 kbps down and 859 kbps up. Actual speed test results reported by DSL Reports and Speedtest.net are shown below.
TCP requires the receiver to periodically send an acknowledgement to let the sender know everything is OK. If the transmitter has not received an acknowledgement after sending a certain amount of data, it stops transmitting and waits; this limit is called the receive window. For a high-speed connection, or where latency is high, the default receive window (RWIN) should be increased to prevent pauses in transmission. Most modern operating systems do a good job of optimizing RWIN, so little is gained by changing it.
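The window a connection needs is the bandwidth-delay product: the amount of data in flight before the first acknowledgement can possibly return. A quick sketch, using assumed figures of 25 Mbps and 50 ms round-trip time:

```python
# Bandwidth-delay product: bytes that must be "in flight" to keep a
# link busy. If RWIN is smaller than this, the sender stalls.
def bdp_bytes(rate_bps, rtt_ms):
    return int(rate_bps * (rtt_ms / 1000) / 8)

print(bdp_bytes(25_000_000, 50))     # 156250 bytes, ~153 KB needed

# Conversely, a fixed 64 KB window caps throughput at RWIN / RTT:
print(64 * 1024 * 8 / 0.050 / 1e6)   # ~10.5 Mbps ceiling
```

This is why window scaling matters on fast or long paths: without it, a 25 Mbps connection with 50 ms latency can never run faster than about 10 Mbps per TCP session.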
The other important tweak is packet size, called the maximum transmission unit (MTU). Maximum packet size is typically limited to 1500 bytes. Normally this setting is fine for broadband access; dialup uses a much lower MTU, typically 576. PPPoE encapsulation adds 8 bytes to each packet, reducing maximum packet size to 1492 bytes. If the sender attempts to transmit a larger packet it will either be rejected or fragmented into two parts, with attendant performance degradation.
With load balancing, a router with multiple WAN ports shares the load. As connection requests come into the router from the LAN, it determines which link to use based on link capacity and loading. A given session is constrained by the speed of whichever link it is assigned; aggregate performance increases because the router parcels out requests across all the links. A typical web page consists of dozens of separate HTTP sessions to different servers, so load balancing helps in that case. If you are downloading a single video, load balancing has no effect.
Bonding is transparent to IP; it looks like a single, faster pipe. Bonding requires cooperation between the ISP and customer, whereas load balancing can be performed unilaterally by the customer. In the case of DSL, bonding is typically performed at the ATM layer, which splits data among multiple ATM streams. DOCSIS 3 cable modems do something similar, allowing the ISP to allocate more than one channel for Internet delivery.
While bonding is able to dramatically improve speed in proportion to the number of connections, it has little if any effect on latency. The modem processing that must occur at each end is unchanged, and even though it is invisible to IP, bits still need to travel over multiple paths and be reassembled before they are handed off to IP.
Most residential accounts are configured automatically each time the customer connects. A dynamically assigned IP address makes it difficult to run a server because the address may change at any time, preventing remote users from connecting until they learn the new address. Dynamic DNS services provide a workaround to run servers on dynamic accounts. A daemon runs on either the router or the server to detect address changes. When a change occurs it notifies the DNS service, which in turn automatically updates the A records for the site. Even with automatic DNS updates there will still be a period of time after the address changes when the server is not accessible and active sessions are aborted. Dynamic DNS services are really only suitable for casual personal servers, not business use.
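The daemon's change-detection loop can be sketched as follows. The check and update URLs here are hypothetical placeholders; every dynamic DNS service defines its own update API, so only the shape of the logic is shown.

```python
# Sketch of a dynamic DNS update daemon. CHECK_URL and UPDATE_URL are
# hypothetical; substitute your service's documented endpoints.
import time
import urllib.request

CHECK_URL = "https://ip.example.net/"           # hypothetical "what is my IP" service
UPDATE_URL = "https://ddns.example.net/update"  # hypothetical DDNS update endpoint

def needs_update(last_ip, new_ip):
    """True when the cached address no longer matches the live one."""
    return new_ip != last_ip

def current_public_ip():
    # The WAN address as seen from outside the NAT router.
    with urllib.request.urlopen(CHECK_URL) as r:
        return r.read().decode().strip()

def run(poll_seconds=300):
    last = None
    while True:
        ip = current_public_ip()
        if needs_update(last, ip):
            # Push the new address so the service rewrites the A record.
            urllib.request.urlopen(f"{UPDATE_URL}?ip={ip}")
            last = ip
        time.sleep(poll_seconds)
```

The window of unreachability the text mentions is the gap between the address actually changing and the next poll plus DNS propagation, which is why short TTLs are used on dynamic DNS records.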
Broadband service is marketed as "always on." Exactly what this means is subject to interpretation. The most "on" service is a bridged or routed connection configured with a static IP address. Once the service is configured, the connection is permanent and always available until the next time the ISP needs to reallocate IP addresses or the power fails.
Dynamic Host Configuration Protocol (DHCP) assigns the client an IPv4 address for a limited period called a lease. Before the lease expires the client attempts to renew it. As long as the ISP continues to renew the lease the user is never disconnected; from the customer's perspective service is always on, and lease renewal is transparent. Some ISPs bind the IP address to the hardware MAC address, so the same IP address is assigned as long as the customer does not change equipment. IPv6 uses a somewhat different mechanism, DHCPv6 prefix delegation (DHCP-PD) or Router Advertisement, but the end result is the same: customer equipment is automatically configured by the ISP.
Point-to-Point Protocol over Ethernet (PPPoE) or ATM (PPPoA) works like traditional PPP dialup. This type of service is common for ADSL; it leverages the ISP's investment in RADIUS authentication and billing equipment. The customer provides a username/password to authenticate; once authenticated, the ISP issues an IP address. If the connection becomes idle the user is disconnected. Most residential routers include a keep-alive mechanism so the connection is never dropped. From the user's perspective the connection is always on, as long as the ISP maintains an active RADIUS login session.
Some ISPs limit maximum connect time. After a certain number of hours the connection is dropped and must be reestablished. This sort of behavior is common for dialup ISPs and Wi-Fi hotspots. When the connection is dropped the customer must log in again to regain Internet access.
The Internet is a rough-and-tumble world, often likened to the Wild West. The power of worldwide connectivity means anyone on the planet with an Internet connection is in a position to attack another connected computer. ISPs often block certain ports to reduce the danger to unsophisticated users. Port blocking is a double-edged sword, as it may interfere with a customer's legitimate use of the Internet. Some ISPs go further, acting as a firewall protecting the customer from hostile attack and examining email for dangerous content or attachments. Some users consider this a great feature in the battle against spam and viruses; others see it as an unwelcome intrusion into what should be individual control of network access.
The popularity of wireless networks raises additional security concerns. In a wired network an attacker must physically connect to the network; with wireless, an attacker is able to eavesdrop from some distance away. This is especially worrisome with Wi-Fi hot spots, since they are in public places and the integrity of the owner is often unknown. When using public hot spots one should be careful accessing any resource where passwords are exchanged in the clear, email in particular, as POP/SMTP credentials are sent in the clear. If at all possible use SSL/TLS authentication to access email accounts. At home, use Wi-Fi Protected Access (WPA2) with a strong password to protect privacy; WPA2 provides robust over-the-air encryption.
IPv6 addressing presents another possible security issue. One of the addressing schemes uses the 48-bit MAC address for the low-order bits of the 128-bit IPv6 address. This means the hard-coded machine MAC address, which in IPv4 is normally not visible outside the LAN, becomes part of the public IP address and remains the same even when connected to a different ISP. A solution to this problem is to have the computer use a random number rather than the MAC address.
As Internet access becomes pervasive there is growing tension between ISP business practices and public policy. ISPs are concerned about being relegated to commodity bandwidth providers. As such they are frantically trying to create business relationships with select third parties to offer bundled services.
Network Neutrality proponents are concerned ISPs will create walled gardens and be in a position to favor some companies and disadvantage others. Opponents of Network Neutrality argue ISPs ought to be able to do anything they want with their own networks.
The reason I went into so much detail earlier about required and optional ISP services was to identify those services only an ISP is able to deliver. Network Neutrality ought to ensure network transparency is maintained and innovation encouraged, with ISPs allowed to offer value-added services while being prevented from acting as gatekeepers. The Internet's rapid rise in popularity is the result of its open architecture. Entrepreneurs need to be able to create new business models and interact with customers without requiring the permission or cooperation of the network owner.
Many states are participating in the national broadband mapping program to determine broadband availability. In NH the program is called, cleverly enough, the NH broadband mapping and planning program. NHBPM is working to deliver more accurate and detailed data on a town-by-town basis and has a speed test to record actual customer speed. The ease of use and data quality vary a lot by state.
Dialup has come a long way from the Bell 103 acoustic modem operating at 300 bps to the current crop of V.90/92 modems capable of over 50,000 bps. Dialup Internet access is available anywhere there is telephone service; it will even work over cellular at very low speed in a pinch. Almost all dialup ISPs support the ITU-T V.90/92 standards. V.90 modems deliver up to 56 kbps (download) over the PSTN; in the US an FCC power limitation reduces the maximum to 53 kbps. V.90 transmission from subscriber to ISP (upload) uses V.34 mode, limiting maximum upload speed to 33.6 kbps. If the modem cannot connect in V.90 mode it automatically falls back to V.34 mode in both directions, with a maximum speed of 33.6 kbps.
V.92 is a minor enhancement to V.90. Upload speed is increased slightly to 48 kbps, and faster auto-negotiation reduces call setup time. V.44 improves compression of reference test data to 6:1 vs 4:1 with V.90. Compression increases apparent speed because it reduces the number of bits transmitted over the slow telephone network. Modem on Hold (MOH) allows the modem to park a data session so the user can answer a short incoming call. This works in conjunction with the phone company's Call Waiting feature and requires support from the ISP.
V.90/92 requires the ISP modem to connect to a phone company digital trunk; only a single digital-to-analog conversion can exist between ISP and user. Phone lines are analog between the customer and the central office or remote terminal, at which point they are digitized at 64 kbps. This means POTS modem technology has reached its theoretical maximum speed; obtaining higher speed requires a different technology.
At connect time the modem probes the phone line to determine noise and attenuation characteristics in order to set the initial connect speed. Speed is constantly adjusted in response to varying line conditions. To obtain maximum speed, V.90 and V.92 modems require a phone circuit that exceeds minimum FCC requirements.
Dialup networking (DUN) is used to establish an Internet connection. The most common method used to traverse the telephone network is via Point-to-Point Protocol (PPP). PPP allows Internet Protocol (IP) packets to traverse the serial point-to-point telephone link between user and ISP. DUN automatically dials ISP phone number, waits for modem to connect and establishes PPP session. The ISP performs user authentication and assigns an IP address. DUN monitors the connection and notifies user when it disconnects. In Windows, Internet Explorer can automatically activate DUN when attempting to connect to a web site.
The dialup ISP business model assumes customers stay connected for relatively short periods of time. To enforce this, most ISPs automatically disconnect the customer when a session time limit is reached. Sessions are also dropped after extended inactivity.
In the quest for higher speed some dialup ISPs support Multilink. Multilink binds two dialup links into a single faster connection. If the customer typically connects at, say, 44 kbps, multilink doubles speed to 88 kbps. Multilink requires two modems, two phone lines, and an ISP that supports it. Where available it is a useful technique to obtain better performance from dialup.
Software at each end of the link splits data between each connection effectively doubling speed. Unfortunately because data is still traveling over low speed dialup multilink does not improve latency.
Call waiting generates an alert tone to inform the user someone else is attempting to call, a process that interferes with an existing data call. Call waiting can be temporarily disabled at the beginning of a call; the sequence varies by locale, in our area *70. Unfortunately, sending the disable sequence on a line not equipped with call waiting is interpreted as part of the dialed number, resulting in an incorrect connection. This is a problem if the modem uses multiple lines and not all are equipped with call waiting.
If a dialup modem shares a phone line with a telephone or fax machine there is the possibility of mutual interference. If the modem is in use, picking up a phone will cause the modem to disconnect; conversely, if the phone is in use the modem may attempt to connect, interfering with the call. One can use a privacy device that monitors phone line voltage to prevent this. When the line is idle the open-circuit voltage is high, around 48 volts; when a phone or modem is in use the voltage drops to less than 10 volts. Privacy adapters measure line voltage to prevent phone use if a call is already in progress. There are a couple of inconvenient side effects to this approach: a privacy device prevents calls being transferred from one phone to another, and it confuses the line-use indicators built into many phones. I designed a Modem Access Adapter to prevent interference when modem and phone share the same line.
Other than requiring a V.90/92 modem there is no installation. PCs, especially laptops, used to include a built-in dialup modem. With the advent of Ethernet and Wi-Fi that is typically no longer the case, so you will need to purchase a dialup modem and connect it to the phone line. Once that is done, create a DUN profile and log into the ISP.
Dialup has the advantage of being accessible anywhere there is a landline phone. It can even be shared by multiple users on a home LAN. For several years in the late 90's I shared a dialup connection on our home LAN, first using a connection-sharing program and later a router. The problem with dialup is its incredibly low speed compared to other forms of Internet access. A couple of decades ago, before web sites became so graphics intensive and software programs became chatty and needed multi-megabyte patches, dialup worked well. Today it is excruciatingly slow.
The US Bell System developed T-1 digital carrier during the early 60's to reduce interoffice transmission cost. Prior to T-1, analog frequency division multiplexing (FDM) was used to carry voice traffic between telephone switching centers. FDM carrier used a 4-wire circuit to carry 24 voice channels, one pair in each direction. T-1 was designed to also carry 24 voice channels, facilitating the transition from FDM to TDM. E-1 digital carrier, used in Europe, is similar, transporting 30 voice channels. Each voice channel is digitized at a 64 kbps data rate; 24 channels require 1.536 Mbps, and adding an 8 kbps control channel results in a data rate of 1.544 Mbps (E-1 is 2.048 Mbps). Strictly speaking, T-1 is the 4-wire copper facility carrying a DS-1 signal at 1.544 Mbps, but popular usage has corrupted this distinction: T-1 is now commonly used to mean any 1.544 Mbps service.
In the early 1980’s T-1 was tariffed and made available to customers. T-1 continues to be popular in commercial service carrying both voice and data. T-1 pricing has dropped dramatically over the years as technology improves and the result of competitive pressure from alternative broadband services.
Voice grade phone service occupies the frequency band of 300-3000 Hz. Low frequencies are suppressed to minimize interference from 50/60 Hz power lines; increasing the upper frequency beyond 3000 Hz does little to improve intelligibility, at the expense of greater bandwidth. Digital sampling must be performed at at least twice the highest frequency of interest to recover the original analog signal, so engineers chose a sample rate of 8,000 samples per second. It was found that sampling to 12 bits, giving 4,096 possible values, produced excellent voice quality, but the resulting 96 kbps per channel yielded a composite data rate beyond what 1960s technology could deliver. To reduce the data rate, engineers used only 8 bits (256 values) per sample, resulting in a 64 kbps data stream. To minimize quality degradation, conversion is performed logarithmically: when the sound level is low, samples are close together; during loud passages, samples are farther apart. This masks quantizing noise generated by the conversion process. Two slightly different methods are used, µ-law in the US and A-law in Europe. The resulting digital signal is called Pulse Code Modulation (PCM). Twenty-four phone calls in the US (T-1) or 30 in Europe (E-1) are interleaved using Time Division Multiplexing (TDM); combined with framing and signaling overhead, the composite data stream is 1.544 Mbps (US) or 2.048 Mbps (Europe).
The PCM coding scheme developed for T-1 is what makes V.90 and V.92 dialup modems possible, and also the reason dialup is limited to 56 kbps. Logarithmic sampling minimizes the effect of audible noise but only allows 7 of the 8 bits to be used for data. 8,000 samples per second times 7 bits per sample results in a maximum data rate of 56,000 bits per second. Dialup modems have reached their theoretical limit.
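The arithmetic behind these rates is easy to verify. One note beyond the text: E-1's 2.048 Mbps comes from 32 timeslots of 64 kbps, 30 carrying voice and 2 carrying framing and signaling.

```python
# PCM arithmetic behind T-1, E-1, and the 56 kbps modem ceiling.
SAMPLE_RATE = 8000             # voice samples per second per channel

pcm_channel = SAMPLE_RATE * 8  # 8-bit samples -> 64 kbps per voice channel
t1 = 24 * pcm_channel + 8000   # 24 channels + 8 kbps framing = 1.544 Mbps
e1 = 32 * pcm_channel          # 32 timeslots (30 voice + 2 overhead) = 2.048 Mbps
v90_limit = SAMPLE_RATE * 7    # only 7 of 8 bits usable -> 56 kbps

print(t1, e1, v90_limit)       # 1544000 2048000 56000
```

The same 8,000 samples per second appears in all three numbers, which is why the dialup ceiling is a hard consequence of the carrier design rather than a modem limitation.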
When used for Internet access voice channelization is neither required nor desired. T-1 data circuits are unchannelized exposing total channel capacity to the IP layer. IP, rather than T-1, performs multiplexing. Some circuits are provisioned to allow flexible control of channelization. This allows an Integrated Access Device (IAD) to dynamically allocate bandwidth between voice and data.
The original implementation of T-1 required regenerators spaced every 6,000 feet. Regenerators recreate the bipolar signals, allowing T-1 to deliver very low error rates compared to analog carrier. Regenerators can be powered from the T-1 line, called a span, eliminating the need for local power. T-1 bipolar signaling is relatively noisy, which requires care during circuit provisioning to prevent interference between T-1 and other services, including other T-1s and DSL in the same cable.
Early T-1 required a 4-wire circuit, one pair in each direction. Newer T-1 deployments using HDSL2 need only a single pair. Digital signal processing techniques similar to those used with DSL reduce outside plant cable requirements and increase the distance between regenerators.
The Channel Service Unit (CSU) is connected directly to the 4-wire facility. The CSU regenerates T-1 bipolar signals before handing them off to the Data Service Unit (DSU). The CSU provides keep-alive and loopback testing, enabling the Telco to monitor line quality.
T-1 uses bipolar plus and minus 3-volt pulses; between pulses the line voltage returns to zero. The Data Service Unit (DSU) converts bipolar signals to a synchronous interface such as V.35, which uses both RS232 single-ended and RS422 differential signaling to connect to customer equipment.
When T-1 was developed the interface between CSU and DSU, called DSX-1, was designated the demarcation point between Telco and customer. DSX-1 is still the demarcation point in the rest of the world. During US deregulation the FCC defined the 4-wire facility as the demarcation point. This caused problems for service providers: management and quality assurance functions were no longer under their control but provided by customer premise equipment (CPE).
The solution was the Smartjack. It presents a 4-wire (2-pair) interface to the customer and implements the service provider's loopback test function. This allows the Telco to perform testing and maintenance functions while complying with FCC regulations.
The service provider will typically install the Smartjack within a few hundred feet of where the drop cable enters the building. The customer purchases and installs a router; the cable between the CPE and Smartjack is a regular Category-rated patch cable.
The modern wired telephone network is almost entirely digital except for the 2-wire analog POTS customer loop. With digital technology multiple voice channels can easily be carried over a single circuit. The digital carrier hierarchy is based on voice channels. The lowest level, called Digital Service 0 (DS-0), is a single PCM digitized voice circuit of 64 kbps. Next is DS-1 (24 voice circuits over T-1 carrier) operating at 1.544 Mbps, then DS-2 (T-2) operating at 6.312 Mbps, equivalent to 4 T-1 circuits, then DS-3 (T-3) at 44.736 Mbps, equivalent to 28 T-1 circuits.
Higher speeds are optical, using Synchronous Optical Network (SONET) and ITU Synchronous Digital Hierarchy (SDH). Optical Carrier 1 (OC-1) and Synchronous Transport Signal Level 1 (STS-1) operate at 51.84 Mbps; next is STS-3 (OC-3) at 155.52 Mbps, then STS-12 (OC-12) at 622.08 Mbps, and so forth. Beginning with STS-3 the hierarchy increases by a factor of four at each step. 10 Gbps STS-192 (OC-192) is an interesting convergence point: it is the first time Ethernet and SONET/STS operate at the same speed, opening the door to Ethernet being carried directly over SONET.
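The two hierarchies above are easy to reconstruct arithmetically: each DS level is a multiple of 64 kbps voice channels plus framing overhead, and each STS-n rate is simply n times the STS-1 base rate. A small sketch using the figures from the text:

```python
# Rates from the text: the digital (DS) hierarchy and the SONET/STS hierarchy.
DS_RATES_KBPS = {
    "DS-0": 64,      # one PCM voice channel
    "DS-1": 1544,    # 24 x 64 kbps plus 8 kbps framing (T-1)
    "DS-2": 6312,    # 4 x DS-1 plus overhead (T-2)
    "DS-3": 44736,   # 28 x DS-1 plus overhead (T-3)
}

STS1_MBPS = 51.84  # SONET base rate (STS-1 / OC-1)

def sts_rate_mbps(n):
    """STS-n / OC-n line rate: n multiples of the STS-1 base rate."""
    return n * STS1_MBPS
```

For example, sts_rate_mbps(192) gives roughly 9953 Mbps, the ~10 Gbps convergence point with Ethernet noted above.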
The tremendous success of T1/E1 prompted the telephone industry to look for a way to deliver high-speed digital service directly to the customer. Integrated Services Digital Network (ISDN) was supposed to be the next big thing, poised to revolutionize the telephone industry. Alas, things have not played out that way. Deployment missteps and high cost slowed adoption. ISDN is viable where other forms of high-speed access are unavailable, but its window of opportunity has long since passed.
Basic rate ISDN provides two 64 kbps bearer channels (B channels) and a 16 kbps data control channel (D channel) over a single voice grade copper loop. Primary Rate ISDN is basically a T-1 connection. ISDN is a circuit switched technology with very fast call setup time. Being digital, the full 64 kbps is available. By current standards ISDN is extremely slow.
ISDN is a circuit switched technology; to access the Internet a phone call is made to the ISP, just as with dialup. Once connected, access speed is 64 kbps due to the end-to-end digital nature of the connection. If the ISP offers multilink the second channel can be bonded to create a 128 kbps link. The extra channel can be automatically torn down and set up as needed to free capacity for a voice call.
Cost Tip – ISDN is a switched service. Make sure the ISP has access numbers, Points of Presence (POPs), close enough that calls are flat rate. Failure to do so will result in a rude surprise when the phone bill arrives. Telcos offer different types of ISDN service; for Internet access, unmetered is ideal.
The provider will bring up and terminate the ISDN line at the premise NID, just like a typical POTS line. The customer is responsible for inside wiring and terminal equipment. There is very little new ISDN equipment being produced, so the customer will likely have to purchase it used. Once voice access is working, connect an RS232 serial cable to the computer. PCs used to include one or two RS232 serial ports, but serial ports are considered legacy and are often eliminated from new products; if that is the case there are many USB to serial adapters on the market. Once connected, configure Dial-Up Networking (DUN) with account credentials and log into the ISP.
ISDN digital subscriber line (IDSL) uses ISDN signaling to deliver 144 kbps data-only service at greater distance than typical DSL. IDSL does not make any use of the circuit switched telephone network, just the loop between the Central Office and customer, much like ADSL.
Digital Subscriber Line (DSL) technology utilizes the telephone copper wiring between subscriber and phone company central office (CO) or Remote Terminal (RT) to deliver high-speed data. This allows the local exchange carrier (LEC) to generate additional revenue by leveraging its massive investment in copper outside plant cabling. Several types of DSL have been developed, hence the xDSL moniker. The most common types are Asymmetric DSL (ADSL) G.992.1, ADSL2 (G.992.3), ADSL2+ (G.992.5) and Symmetric DSL (SDSL). Telcos like DSL not only as another revenue source but because it gets long duration data calls off the Public Switched Telephone Network (PSTN). This minimizes the need for expensive upgrades to the circuit switched phone network.
ADSL was initially developed for video on demand and has been repurposed for Internet access, with higher download speed (toward the subscriber) than upload. It uses frequencies above those used by Plain Old Telephone Service (POTS), allowing it to coexist with voice service. This minimizes cost by allowing a single copper pair to be used for both voice and data. Typical ADSL speed is 768 – 7,000 kbps downstream (toward the customer) and 128 - 800 kbps upstream (toward the Internet). ADSL2 increases that to 12 Mbps down and 1 Mbps up. ADSL2+ doubles the maximum download rate over short loops.
The Digital Subscriber Line Access Multiplexer (DSLAM) at the telephone Central Office or Remote Terminal is connected to the customer's phone line. The voice portion is passed through a low pass filter and delivered to the POTS phone switch. The DSLAM recovers customer data and uses Asynchronous Transfer Mode (ATM) to link the customer to the ISP. Telcos use ATM because it facilitates support of third party ISPs. At the customer location a similar filter is used to separate DSL from POTS. This can be a single whole house POTS/DSL splitter or a filter connected ahead of each non-DSL device.
Maximum DSL speed is a function of line length, wire gauge and line quality. ADSL service is limited to about 18,000 feet, with closer customers able to obtain higher speed. A variant of ADSL2 called Reach Extended adds a couple thousand feet at low speed. Remote DSLAMs, called Remote Terminals (RT), shorten loop distance by moving the DSLAM closer to the customer. This increases the number of potential customers within range, and the shorter loop increases maximum speed.
FCC regulations require the Incumbent Local Exchange Carrier (ILEC) to allow third party data local exchange carriers (DLECs) access to the copper access network. The copper subscriber loop is tariffed as an unbundled network element (UNE). DLECs rent collocation space within the central office and install their own DSLAMs and backhaul facilities. Even ILECs need to set up a separate entity to deliver DSL because, unlike phone service, data is not a regulated service.
DSL can also be configured as a wholesale service. The ISP enters into an agreement with a DLEC. The DLEC in turn rents the copper circuit from the ILEC, installs a DSLAM within the central office and transports customer traffic to the ISP's point of presence. This is why Telcos like using ATM to deliver DSL: it allows them to offer wholesale DSL service. As such they evolved a complex interconnect scheme consisting of 1) the physical copper loop, 2) Asynchronous Transfer Mode (ATM) virtual circuits to transport customer packets over the first-mile network to the respective ISP, and 3) typical ISP functions. DSL may involve three different companies: the ILEC supplying the copper, the DLEC providing first-mile transport, and the ISP doing the rest.
Wholesale DSL service has not worked out well; most of the players have left the market. Colocation is still viable for copper. The FCC has not deemed fiber an unbundled network element, so DLECs are often not able to rent dark fiber, only copper. MegaPath, formerly Covad, is probably the best known DLEC. In NH, the NY-based FirstLight Fiber acquired the CLEC G4 Communication and continues to offer DSL and voice service.
VDSL2 is a high speed variant of ADSL using the same Discrete MultiTone (DMT) modulation scheme to transmit data in the 100 Mbps range over short loops. It accomplishes this by using many more tones, resulting in a much higher upper frequency. VDSL2 is the preferred method to deliver fiber to the curb (FTTC) and is often used to terminate fiber for copper distribution in multitenant buildings.
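The "many more tones" point can be made concrete. DMT subcarriers are spaced 4.3125 kHz apart in ADSL and in common VDSL2 profiles; ADSL uses 256 tones while, for example, the VDSL2 17a profile uses 4096 (figures from the G.992.x/G.993.2 families; treat the exact profile numbers here as an assumption). The upper band edge is simply tone count times spacing:

```python
TONE_SPACING_KHZ = 4.3125  # DMT subcarrier spacing shared by ADSL and most VDSL2 profiles

def top_frequency_mhz(num_tones, spacing_khz=TONE_SPACING_KHZ):
    """Upper edge of the DMT band: number of tones times tone spacing."""
    return num_tones * spacing_khz / 1000

# ADSL: 256 tones -> ~1.1 MHz; VDSL2 17a: 4096 tones -> ~17.7 MHz.
```

The roughly sixteen-fold wider band is what lets VDSL2 reach the 100 Mbps range, at the cost of much faster attenuation with distance.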
ADSL and voice telephone share a single copper circuit. At each end filters prevent DSL from interfering with voice phone service. To reduce cost ADSL service providers include inline filters as part of a customer self-install kit. The customer is instructed to install a filter at each non-DSL device. Having the customer self-install filters eliminates the expense of a truck roll.
An alternative to per-device filtering is a whole house POTS/DSL splitter. The splitter provides a low pass filter isolating voice from the high frequency DSL tones. The splitter has two outputs: "Data" connected directly to the DSL modem and "Voice" connected to inside phone wiring. Some splitters contain a half-ringer test circuit after the low pass POTS filter. This allows removal of the half-ringer in the NID, minimizing DSL signal loading. Splitters do a better job isolating DSL from voice than inline filters. Where speed is high or signal is marginal a splitter will improve margin.
DSL lines are susceptible to noise bursts from many sources: lightning, ignition noise, radio transmissions and power line faults. DSL spec writers were aware of this and included forward error correction (FEC). FEC adds redundant check bits to the data stream. If noise corrupts data these extra bits are used to recover it. As long as only a few bits are damaged the receiver is able to correct the errors, avoiding the need for retransmission.
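DSL actually uses Reed-Solomon codes for FEC, but the check-bit idea is easiest to see with the much simpler Hamming(7,4) code: 3 parity bits protect 4 data bits, and any single flipped bit can be located and corrected without retransmission. A minimal sketch (function names are my own):

```python
def hamming74_encode(d):
    """Encode 4 data bits as 7 bits; parity bits sit at positions 1, 2 and 4."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4   # covers positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4   # covers positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct any single flipped bit, then return the 4 data bits."""
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the bad bit, 0 if clean
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]
```

Flip any one of the seven transmitted bits and the decoder still recovers the original four data bits; flip two and it cannot, which is exactly the "only a few bits damaged" limitation the text describes.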
If a noise burst is long it corrupts too many bits for the receiver to undo the damage. In that case the bad packet is passed to the higher layer protocol. With TCP/IP, TCP requests retransmission; needless to say that takes a "long" time. UDP/IP, used with VoIP and streaming data, does not have a retransmission scheme; there is not enough time to retransmit data before it is needed by the application. Streaming applications have provisions to fake missing data. How badly missing data affects quality depends on the application and how extensive the damage is.
When interleave is turned on, bits from several frames are interleaved in time. If a noise burst is long relative to bit time it corrupts many bits. When the receiver deinterleaves the data the corrupt bits are now spread across multiple frames, increasing the chance FEC is able to correct them. This eliminates the need for retransmission or the application having to fake missing data.
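A simple block interleaver makes the burst-spreading effect visible: write bits into rows, transmit by columns, and a burst on the line lands in different rows (frames) after deinterleaving. This is an illustrative sketch, not the convolutional interleaver DSL actually specifies:

```python
def interleave(bits, depth):
    """Write bits into rows of width `depth`, read out by columns.

    Assumes len(bits) is a multiple of depth."""
    rows = [bits[i:i + depth] for i in range(0, len(bits), depth)]
    return [rows[r][c] for c in range(depth) for r in range(len(rows))]

def deinterleave(bits, depth):
    """Inverse of interleave for the same depth."""
    nrows = len(bits) // depth
    return [bits[c * nrows + r] for r in range(nrows) for c in range(depth)]
```

With 12 bits in 4-bit frames, a 3-bit burst at the start of the interleaved stream corrupts original bit positions 0, 4 and 8: one error per frame, which single-error-correcting FEC can repair.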
As speed increases the number of bits affected by a given noise burst increases. Say a noise burst corrupts a single bit at 768 kbps; at 1,500 kbps the same pulse affects two bits, and at 3,000 kbps four. In addition, as transmission speed increases the signal to noise margin decreases, making transmission more susceptible to noise corruption.
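The scaling is just burst duration times line rate. Assuming for illustration a burst of about 1.3 µs (roughly one bit time at 768 kbps), the 1-2-4 progression in the text falls out directly:

```python
def bits_hit(burst_seconds, rate_bps):
    """Approximate number of bit times a noise burst overlaps at a given line rate."""
    return max(1, round(burst_seconds * rate_bps))

BURST = 1.3e-6  # ~1.3 microsecond burst, an illustrative assumption
```

Doubling the line rate doubles the number of bits a fixed-length burst can corrupt, which is why interleaving matters more at higher speeds.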
The downside of interleave is slightly higher latency, because multiple frames are processed as a single entity. The penalty goes down as speed goes up, since a given frame takes less time to transmit at higher speed. Unless you are an avid gamer interested in the absolute lowest possible ping time, interleave is transparent; other network effects swamp the slight increase (10-20 ms) in first hop ping. Telcos did not implement interleave to annoy gamers; they did it to improve overall customer satisfaction.
The ADSL2 specification allows multiple phone lines to be bonded together to obtain higher speed. This is accomplished through an ATM inverse multiplexing protocol. In some instances this may be a cost effective strategy to increase speed.
The most common type of ADSL shares the same physical circuit as the telephone. The cost of the line is charged to the telephone service; DSL in effect rides for free. It is possible to get DSL as a standalone service, called dry loop. In that case the ISP will pass along an additional charge to cover the cost of renting the circuit. In some cases this is not much less than actually having phone service.
Cost Tip – rather than go dry loop, see how much it costs to get some sort of lifeline POTS service. You may need to ask, as this may not be a published rate. The cost may not be much more than dry loop, and you get the benefit of a landline to call 911 if needed. Having dial tone also makes it somewhat easier to troubleshoot DSL and minimizes the risk that a careless craftsperson will reassign the loop to someone else, assuming it is idle.
DSL uses the 100-year-old copper telephone network to carry high-speed data. This is an impressive engineering accomplishment. Unfortunately not all phone lines are suitable for DSL. Even assuming the local central office (CO) or remote terminal (RT) is equipped for DSL, service may not be available for a number of reasons. This section discusses common problems and, where applicable, workarounds.
In the bad old days before US telecom divestiture (1880s to early 1980s) the Phone Company supplied service, owned the customer premise equipment (CPE) and leased it to the customer. The customer was prohibited from connecting anything to the telephone network. With divestiture the Phone Company's regulated responsibility was limited to delivering service to the premise; inside wiring and CPE became the customer's responsibility.
This created a dilemma for the Phone Company: how to determine if a problem was their responsibility or that of the customer? Enter the Network Interface Device (NID). The NID is the demarcation point between Phone Company and customer. It incorporates lightning protection and a method to easily disconnect customer premise equipment (CPE) from the telephone network. Early NIDs used a modular jack connected to an old style carbon block lightning protector. Over time NIDs evolved into an integrated package and gas tube surge protection replaced carbon block. Gas tube protection is preferred because the module is hermetically sealed, provides more consistent overvoltage protection and has lower shunt capacitance than carbon block. Carbon protectors have a tendency to increase circuit noise over time, which may cause problems if the DSL signal is weak.
The Phone Company uses automatic test equipment called mechanized loop test (MLT) to periodically test copper phone lines. They wanted a device, built into the NID, that allowed MLT to determine where the network ended and customer responsibility began. There have been two different approaches: the MTU and the Half-Ringer.
The MTU (Maintenance Termination Unit) was developed during the early days of deregulation. It is a pretty clever device, consisting of a series-pass voltage controlled switch on each leg of the Plain Old Telephone Service (POTS) circuit. During normal phone usage the switch conducts and POTS equipment operates normally. Testing, done at a lower voltage, does not trigger the series element, thus isolating the CPE side from the telephone network. The MTU, being a series-pass device, has four leads: two connect to the Telco side, the other two to the CPE.
Unfortunately MTUs are incompatible with DSL. The DSL modem does not draw current from the phone line so the MTU never turns on, isolating the DSL modem from the telephone network. If your line has an MTU it must be removed. For reasons too involved to go into here, MTUs caused other problems and have not been used for years. If your phone line had an MTU it was most likely removed years ago, but it is possible some were missed. Automated testing should flag the existence of an MTU, but not always.
The Half-Ringer is a simple circuit that emulates the old style electromechanical Western Electric ringer, providing a test signature for automatic test equipment. It consists of a capacitor, resistor, and back-to-back Zener diodes. ADSL is designed to operate in the presence of a half-ringer, so in most cases it will have no effect. It does represent a small load, so if the signal is marginal removing it may help. SDSL is not able to operate in the presence of a half-ringer and it must be removed.
It has been standard practice in many areas of the United States to install, at the Network Interface Device (NID), a network termination device called a half-ringer. It is an example of an AC termination device, since it is detected using AC techniques.
A normal POTS mechanical ringer is made up of an inductor and capacitor in series bridged between Tip and Ring. The half-ringer is a capacitor in series with a Zener diode and resistor. In the U.S. this is a 0.47 microfarad capacitor without the inductive part of the circuit, hence the name "half" ringer.
ADSL service is limited to about 18,000 feet. Some ILECs are installing Remote Terminals (RT) to reduce cable distance, allowing them to serve more customers at higher speed. ADSL2 and Reach Extended ADSL2 have slightly extended maximum distance. Obtaining an accurate pre-order distance estimate can be difficult. The effective wire distance between DSLAM and customer is often substantially longer than driving distance, making map based estimates questionable.
A back-of-the-envelope method to calculate wire distance is to multiply downstream attenuation by 250 to obtain distance in feet. The graph below shows typical downstream speed vs length. The portion at 80-90 dB is extra distance obtained by virtue of the ADSL2 Reach Extended option. By way of example, I am a DSL customer; my attenuation is 46 dB and the modem syncs at 7 Mbps +/- 400 kbps with 6-10 dB margin.
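The rule of thumb above reduces to a one-line calculation (the 250 feet-per-dB factor is the rough estimate from the text, not a precise model; actual distance depends on wire gauge and frequency):

```python
def loop_length_feet(downstream_attenuation_db):
    """Back-of-the-envelope wire distance: ~250 feet per dB of downstream attenuation."""
    return downstream_attenuation_db * 250
```

For the 46 dB example above this estimates a loop of about 11,500 feet, comfortably inside the ~18,000 foot ADSL limit, which is consistent with the reported 7 Mbps sync rate.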
When telephone feeder cable is installed it is not known how many customer circuits will be needed at each location. The solution is to run a large feeder cable past many customers. As service is installed the technician selects an unused cable pair and splices it to the drop cable. The circuit feeding the drop may continue for hundreds or thousands of feet beyond the customer, resulting in a bridged tap. Bridged taps are of no consequence for telephone service but can degrade DSL. The presence of a bridged tap causes DSL signal to split at the tap going down both paths. When it reaches the end it is reflected back into the line, creating interference. DSL is designed to tolerate some amount of bridged tap, but if circuit is marginal it may cause problems or push customer over distance limit. SDSL providers typically pay Telco to remove bridged taps. This is expensive and is not ordinarily done for low cost residential ADSL.
Resistance and impedance attenuate signals. This effect is more pronounced at high frequencies and long circuits. Load coils are used on long loops to cancel these harmful effects resulting in better voice characteristics. Load coils are typically installed on loops over 18,000 feet. H88 load coils, the most common type, are spaced every 6,000 feet beginning 3,000 feet from the central office.
Unfortunately load coils are incompatible with DSL. They flatten response over the voice frequency range but severely attenuate the high frequencies used by DSL. If load coils are present they must be removed.
Digital Loop Carrier (DLC), Digital Added Main Line (DAML) et al are techniques to allow multiple telephone customers to share a single copper circuit. Phone companies use loop carrier when there are no available circuits and in rural areas where cost of active electronics is less than running a dedicated circuit to customer. Unfortunately most forms of DLC are incompatible with DSL.
Telco feeder cable carries many different services: POTS, ISDN, DSL and T-1. Phone circuits often closely parallel power lines, picking up power line noise. Imperfections cause unintentional coupling from one circuit to another. This raises the noise floor; if noise becomes excessive, speed is impacted.
Residential DSL is not typically warranted for speed; it is a best effort service. If noise is present during a phone call the customer is more likely to get the problem resolved than if it only affects DSL or dialup.
Much advertising ink has been spilled stating DSL is not shared. While that is certainly true, all Internet connections are shared at some point; the issue is where, and whether it will affect service. With DSL the chokepoint is backhaul from the DSLAM, especially in the case of remote terminals. If backhaul becomes congested all users of that DSLAM will suffer.
The copper phone circuit is able to carry not only telephone and Internet but dangerous voltages. This may occur due to nearby lightning strikes or accidental connection between power line and phone. One of the functions of the NID is to protect people and equipment from hazardous voltages. Older installations use a carbon block protector while newer ones use a hermetically sealed gas tube. Both have the same function: when exposed to excessive voltage they short the phone line to ground.
Assuming you already have a landline, most residential ADSL is self-install: the ISP sends out an ADSL router and several inline filters. Filters need to be connected to each non-DSL device and the modem located in a convenient place. Connect the modem to the PC with an Ethernet cable and plug the modem into the phone jack. Most ISPs use PPPoE, which requires the customer to enter a username and password. In many cases the router has a walled-garden mode that walks the customer through the required steps until it is configured.
VDSL2 is optimized for very high-speed service over short telephone loops. Think of VDSL2 as ADSL on steroids. It uses the same DMT modulation as ADSL but many more tones, extending the upper frequency range to tens of MHz. The sweet spot for VDSL2 is 50 Mbps at 3,000 feet. To deploy VDSL2 carriers are building fiber to the curb (FTTC) networks. Video ready access device (VRAD) cabinets are deployed in the field near customers and linked to the telephone central office via fiber. This allows the carrier to deliver triple play converged services (data, voice, and TV) while leveraging existing copper infrastructure.
VDSL2 is fast enough to deliver limited television service while at the same time providing high-speed Internet. Standard definition TV (SDTV) requires 2-3 Mbps per program while high definition (HDTV) is about 15 Mbps using MPEG2 compression. MPEG4 is more efficient, reducing the rate by about half, to 1.5 and 8 Mbps respectively.
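These per-program rates make it easy to see why the codec matters so much at the 50 Mbps VDSL2 sweet spot. A small sketch using the figures from the text (the 2.5 Mbps SD/MPEG2 value is a midpoint assumption for the stated 2-3 Mbps range):

```python
# Approximate per-program rates from the text, in Mbps.
RATES_MBPS = {
    ("SD", "MPEG2"): 2.5,   # text gives 2-3 Mbps; midpoint assumed
    ("HD", "MPEG2"): 15,
    ("SD", "MPEG4"): 1.5,
    ("HD", "MPEG4"): 8,
}

def programs_per_link(link_mbps, kind):
    """How many simultaneous programs of one kind fit in a given link."""
    return int(link_mbps // RATES_MBPS[kind])
```

On a 50 Mbps loop, moving from MPEG2 to MPEG4 roughly doubles the number of HD programs that can be carried alongside Internet traffic.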
VRADs are controversial because they are rather large and targets for graffiti. In many urban areas there is significant customer resistance. VRADs are typically located near existing cross connect boxes to gain access to customer copper circuit. VRADs require both AC and backup power.
AT&T has been the most aggressive US company rolling out VRADs and VDSL2, with mixed results. Speed is often less than satisfactory, a significant problem because customer speed is so dependent on distance and line quality.
Because VDSL2 uses such high frequencies, crosstalk is a problem. Copper customer loop uses twisted pair, but the twist is fairly loose, unlike the tight twist of Category 5 or 6 cables. The loose twist means the circuit is less able to reject high frequency noise. Phone cable often consists of hundreds of pairs; noise from one circuit is coupled into the others, degrading performance. If VDSL2 crosstalk can be eliminated, much higher and more consistent speed can be delivered over the same length circuit.
Vectoring is a crosstalk reduction strategy that monitors multiple circuits, computes noise effects for each line, and then modifies transmission to reduce the effect. When vectoring is applied to all pairs in the cable, 100 Mbps at 1,500 feet and 50 Mbps at 3,000 feet become practical.
For vectoring to be successful all VDSL2 circuits must be under the same management; this poses a problem where VDSL2 is used by both the incumbent and competitive carriers. Due to the much lower frequencies used by ADSL and ADSL2+, crosstalk from those signals does not have much effect on VDSL2.
Bonding is another way to increase end user speed but is relatively expensive as it requires two customer loops, and two interfaces at both the DSLAM and customer modem. A bonded connection looks like a single higher speed link to Internet traffic.
An interesting take on bonding is, rather than treating a 2-pair circuit as two separate channels, to treat it as three channels referenced to a common ground. Making this work requires substantial signal processing. At this point it is at the research stage and is not being implemented by carriers.
The Cable TV (CATV) industry started in the 1950s as Community Antenna TV in areas where rooftop antennas did not provide adequate reception. Early pioneers found they could locate a large antenna on a local mountaintop and distribute broadcast TV over coaxial cable. By the 1990s the industry was looking for new revenue streams and ways to fend off inroads being made by Direct Broadcast Satellite (DBS).
Historically Cable TV has been a one-way medium. TV signals originated at the CATV Head End (HE) and were delivered to subscribers over coaxial cable. To accommodate Internet service the industry needed to upgrade one-way "broadcast" cable distribution to a bidirectional system. This involved replacing distribution amplifiers with bidirectional amps. Previous upgrades had converted the coaxial network to Hybrid Fiber Coax (HFC): fiber is deployed deep into the CATV network, redundant fiber loops interconnect the Head End to hubs, and the hubs in turn connect to local nodes that convert fiber to coax. Coax is only used for the relatively short distance connecting individual subscribers to the HFC network.
CableLabs, an industry consortium, developed DOCSIS to deliver two-way Internet service over the HFC CATV network. DOCSIS 1 delivers per-segment bandwidth of up to 42 Mbps toward the customer and 10 Mbps upload. DOCSIS 2 increased upload to about 30 Mbps. DOCSIS 3 increased downstream to 150 Mbps and upstream to 100 Mbps by bonding multiple channels. DOCSIS 3.1 increased speed to 10 Gbps down and 1 Gbps up. Note this is the total data rate for a particular node, which may serve 100 or so customers.
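Since the quoted rates are per node, not per subscriber, the average share available if everyone is active at once is a simple division. Using the DOCSIS 3 figure and the ~100 customer node size from the text:

```python
def fair_share_mbps(segment_mbps, subscribers):
    """Average per-subscriber share if every subscriber on the node is active at once.

    In practice users are rarely all active simultaneously, which is why
    oversubscribed shared segments still feel fast most of the time."""
    return segment_mbps / subscribers
```

A 150 Mbps DOCSIS 3 node split across 100 subscribers works out to only 1.5 Mbps each at full load, which is why operators keep shrinking node sizes by pushing fiber deeper.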
Some early Cable Internet deployments were unidirectional. Cable network was used for downstream transmission and dialup for upstream. This allowed CATV operators to quickly offer high-speed Internet service prior to upgrading cable facility to carry bi-directional data.
CATV is typically thought of as a residential service, but the industry is actively courting commercial customers. High speed and low cost make Cable based Internet access an attractive alternative to expensive T-1 service.
The CATV network, much like the phone network, has been pressed into service to deliver high speed Internet connectivity. There are a number of issues that interfere with obtaining maximum possible speed.
Cable is a shared medium; each user competes with others on the same segment. While all Internet access is shared at some point, Cable is shared in the first-mile. As more customers subscribe the Cable provider, called a Multiple System Operator (MSO), must reduce the number of subscribers per segment to deliver acceptable service. MSOs are aggressively driving fiber deeper into their outside plant to reduce the number of customers served by a node, allowing them to offer higher speed.
This is why the Cable industry has been so aggressive going after "bandwidth hogs," customers that upload or download a lot. It is not uncommon to have a daily or monthly cap on residential cable accounts. If the cap is exceeded, speed is throttled or service discontinued.
DOCSIS uses a time slot mechanism, called Time Division Multiple Access (TDMA), to facilitate equitable upload over the shared cable segment. The Cable industry assumed customers would primarily use download capacity, but customers are taking advantage of Internet peer-to-peer capabilities to create and host their own data, use Voice over IP (VoIP) telephone service, and share files peer-to-peer. This strains limited Cable upload capability.
There are a number of techniques to increase upload performance. Synchronous Code Division Multiple Access (S-CDMA) works better in the lower frequency upstream channels unsuitable for Advanced Time Division Multiple Access (A-TDMA). Another technique, channel bonding, combines multiple upstream channels to increase performance. The latest DOCSIS specification goes a long way to alleviating the upload problem.
Cable systems operate up to about 900 MHz. Much of this range is not authorized for over the air TV and is used by other wireless services. This places stringent demands on the cable operator to prevent RF leaking out of the network, interfering with other radio users, or leaking in, interfering with subscribers. Ingress leakage is an especially difficult problem at the low frequencies below Channel 2 (54 MHz) used for DOCSIS upload.
Whitespace broadband uses locally unused TV channels to deliver radio based broadband. The concern is that having a nearby transmitter, even though it is low power, will inject enough noise into the CATV network to degrade performance.
The Cable company endeavors to maintain adequate signal levels to support both DOCSIS Internet access and TV. Common practice is to install a two-way splitter where the cable enters the residence. One drop is connected directly to the DOCSIS modem, the other to the TV. If multiple TVs are used a bidirectional amplifier may be needed to make up for signal loss through the splitter. The DOCSIS modem should always be connected to the two-way splitter rather than behind an amplifier.
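The splitter loss the amplifier compensates for is easy to estimate. Assuming a typical per-port loss of about 3.5 dB for a two-way CATV splitter (3 dB from halving the power plus some insertion loss; treat the exact figure as an assumption), cascaded splits add up quickly:

```python
SPLIT_LOSS_DB = 3.5  # assumed per-port loss of a typical 2-way CATV splitter

def level_after_splitters(input_dbmv, n_two_way):
    """Signal level after cascading n two-way splitters."""
    return input_dbmv - n_two_way * SPLIT_LOSS_DB
```

Two cascaded two-way splits cost about 7 dB, which is why a multi-TV installation often needs a distribution amplifier, and why the modem belongs on the first splitter where the signal is strongest.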
Coax cable, being an electrical conductor, may carry stray currents into the building. NEC requires the shield to be bonded to building ground system to minimize potential shock hazard. Excessive noise or AC hum can degrade both TV and Internet access.
If Cable has never been installed the MSO will need to install coaxial drop, ground coaxial cable shield where it enters the building and install a splitter. DOCSIS modem goes to one output of the splitter the other is used to connect one or more TVs. Depending on signal levels and number of TVs the installer may need to install a distribution amplifier.
In the quest for high speed Internet what about electric utilities? One would think they are well positioned to take advantage of this growing demand. At a seminar I attended many years ago the speaker commented that electric companies are ideally positioned to be broadband providers because they have: 1) Rights of way, 2) Guys with trucks, and 3) know how to send out a bill every month. In short they appear to be well positioned to roll out high speed Internet access. Alas except for a few isolated cases with municipal power utilities that have rolled out fiber optic networks this has not happened.
Broadband over Power Lines (BPL) in the context of this paper refers to delivering high speed Internet access over the long distance electrical transmission grid. It is distinct from the HomePlug Alliance specification using electrical wiring to create an Ethernet LAN.
The same technique that shoehorns megabit DSL onto phone lines can be used to send bits over power lines. Electric utilities have experience with this technology: they use a much slower version to transport SCADA telemetry data from remote substations. If successful it represents a low-cost way to deliver broadband without the need to string expensive fiber optic cables.
The reality is the power transmission network has turned out not to be well suited to high speed Internet access so interest has waned. Where this technology may have a future is in narrowband communication in support of the Smart Grid. Smart meters need a way to communicate with the mothership and some flavor of BPL will likely be used.
The holy grail of broadband is fiber optic service all the way to the customer, called Fiber-to-the-Premises (FTTP). A fiber network costs $1,000 to $2,500 per home passed. To put that into perspective, it is about twice the cost of a copper phone line and three times that of Cable. Service providers are faced with the difficult business decision of choosing to invest in technology to extend the life of the existing copper network or take the plunge and install fiber. Deploying fiber puts the company in a very strong long-term competitive position but demands tremendous up-front capital investment. Triple-play service (voice, data and video) takes advantage of fiber's tremendous bandwidth to deliver multiple services over a single network.
Fiber is ideal for greenfield locations. Installing fiber in a new location is cheaper than the combined cost of deploying a traditional copper POTS phone line and an HFC Cable network, and results in much lower maintenance cost.
High cost of deploying FTTP is an impediment to adoption. Companies are working hard to reduce both labor and component cost. As more systems are installed cost is falling rapidly. These efforts range from use of fiber optic ribbon cable and preterminated cable assemblies to installing fiber in sewer mains or abandoned water and gas pipes. Advances in fiber optic cable manufacture reduce the effect of tight bends on optical loss. Bend insensitive cable is ideal for multitenant buildings.
FTTP is able to emulate analog plain old telephone service (POTS) by reserving channel capacity and digitally encoding voice. From a subscriber's perspective the service is identical to legacy POTS. This is not Voice over IP; it is done at the physical layer. It is also possible to implement phone service as Voice over IP (VoIP).
TV programming can be delivered by emulating the methods used by Cable providers. A third "color," or lambda, is used to transmit programs from head end to customer. At the ONT the optical signal is converted to RF and delivered over existing coaxial cable to set-top boxes like those used for Cable.
Optical networks are well suited for IPTV Video on Demand (VoD). VoD requires tremendous network capacity, especially for HDTV. Each HDTV program requires about 15 Mbps; SDTV about 2 Mbps. FTTP has enough capacity to easily deliver individual HDTV programs to a family of four with enough capacity left over for Internet access.
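Using the per-program rates quoted above, a quick back-of-the-envelope check makes the point; the 100 Mbps access rate is an assumed figure for illustration:

```python
HDTV_MBPS = 15     # per-program rate quoted in the text
SDTV_MBPS = 2      # per-program rate quoted in the text
access_mbps = 100  # assumed FTTP access rate for illustration

video_mbps = 4 * HDTV_MBPS            # four simultaneous HDTV streams
internet_mbps = access_mbps - video_mbps  # capacity left for data

print(f"video: {video_mbps} Mbps, left for Internet: {internet_mbps} Mbps")
```

Even with every family member watching a separate HDTV program, well over a third of the assumed access rate remains for ordinary Internet traffic.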
With Switched Ethernet a customer is directly connected to a port on an Ethernet edge switch, typically located in a remote enclosure relatively close to the customer. The advantage of this approach is that costly electro-optical interfaces only need to operate at link rate, typically 100 Mbps or 1 Gbps, rather than the aggregate PON rate. Customer premises equipment is cheaper since it only has to convert a point-to-point optical interface to Ethernet.
Switched Ethernet simplifies provisioning. Once a customer is connected, increasing or decreasing access speed can be performed by reconfiguring the edge switch, whereas with PON provisioning changes may require sending a craftsperson out to physically modify split ratios. Privacy is very good, as only traffic destined for the customer is visible at the customer's drop. The downside is the requirement for remote equipment huts to house Ethernet switches and the need for backup power during emergencies.
A Passive Optical Network (PON) uses a single optical fiber to deliver services to 32 or more customers. Traffic toward the customer is broadcast to all endpoints. Upstream traffic utilizes a time-division multiplexing (TDM) scheme to ensure access fairness. Traffic in each direction is carried on a different color, called a lambda. Wavelength Division Multiplexing (WDM) is the optical equivalent of the Frequency Division Multiplexing (FDM) used at lower frequencies. WDM allows a single fiber to carry multiple channels in each direction without their interfering with one another. Convergence points contain passive optical splitters to connect multiple customers to a single trunk fiber. PON's advantage is that it does not require active electronics in the field and a single fiber is able to serve many customers.
In addition to Internet access PON is able to deliver TV by emulating the legacy CATV hybrid fiber coax (HFC) network. This is accomplished by using a third lambda to carry CATV information. At the customer location an optical-to-electrical converter translates the PON optical signal to a traditional CATV coax electrical interface, much the same as a CATV node. This preserves backward compatibility with the legacy CATV network.
Both the ITU and the IEEE Ethernet in the First Mile working group have developed PON specifications. Download speed ranges from a low of 622 Mbps for the first-generation ITU standard to 10 Gbps in both ITU and IEEE versions.
ATM PON (APON) uses ATM to provide data and voice virtual circuits over a single fiber. The APON specification delivers aggregate bandwidth of 622 Mbps down and 155 Mbps up. Maximum fiber distance is 20 km (65 kft). Broadband PON (B-PON) adds a third optical wavelength to emulate the legacy CATV network for triple-play service. ATM is used for transport, reducing effective IP payload by about 10% due to ATM overhead. One also needs to factor in AAL2 POTS voice channels at 64 kbps each. Assuming a 1:32 split ratio B-PON delivers about 18 Mbps down and 4.5 Mbps up to each customer.
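The per-customer figures follow directly from the aggregate rates: strip roughly 10% ATM overhead, then divide by the split ratio. A minimal sketch:

```python
DOWN_MBPS, UP_MBPS = 622, 155  # aggregate B-PON rates from the text
ATM_OVERHEAD = 0.10            # approximate ATM "cell tax" from the text
SPLIT = 32                     # 1:32 split ratio

per_down = DOWN_MBPS * (1 - ATM_OVERHEAD) / SPLIT
per_up = UP_MBPS * (1 - ATM_OVERHEAD) / SPLIT

print(f"~{per_down:.1f} Mbps down, ~{per_up:.1f} Mbps up per customer")
```

This reproduces the "about 18 Mbps down and 4.5 Mbps up" figure (before subtracting any 64 kbps AAL2 voice channels in use).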
1550 nm is used to emulate the CATV Hybrid Fiber Coax (HFC) network. In the US TV channels are 6 MHz wide. Each channel can carry a single analog SDTV channel or up to 42 Mbps of data. This enables a single channel to carry multiple SDTV or HDTV programs.
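Using the per-program rates quoted earlier (about 15 Mbps for HDTV, 2 Mbps for SDTV), the 42 Mbps channel payload works out as follows:

```python
CHANNEL_MBPS = 42              # data capacity of one 6 MHz channel
HDTV_MBPS, SDTV_MBPS = 15, 2   # per-program rates quoted in the text

hd_programs = CHANNEL_MBPS // HDTV_MBPS  # whole HDTV programs per channel
sd_programs = CHANNEL_MBPS // SDTV_MBPS  # whole SDTV programs per channel

print(f"{hd_programs} HDTV or {sd_programs} SDTV programs per channel")
```

In practice multiplexing overhead and variable bit rates reduce these counts somewhat, but the arithmetic shows why one digital channel can replace many analog ones.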
1310 nm is used to transmit data from customer to OLT. Upstream traffic is based on a time-division multiplexing scheme to ensure fairness. Unused slots are reclaimed and made available to other customers.
The system deployed by Verizon FiOS includes 4 emulated POTS channels. This is not Voice over IP; the POTS channels are carried over ATM, making them invisible to Internet traffic. Voice quality is identical to regular POTS, typically better due to the short length of the copper circuit.
A single fiber connects the ONT to the PON network. Verizon makes heavy use of preterminated fiber to reduce installation cost. Fiber and UPS wiring connect to the left-hand Telco side of the ONT. The right-hand customer side has four analog POTS interfaces, an F connector for TV and an RJ45 Ethernet connector for data.
During a power failure the Internet and TV portions are shut down after a few minutes to conserve battery life. The uninterruptible power supply (UPS) keeps voice service active for about 12 hours when idle and about 4-5 "talk" hours.
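Those runtime figures are consistent with a small sealed lead-acid battery. The sketch below assumes a 12 V, 7.2 Ah battery and rough idle/off-hook power draws; all of these values are assumptions for illustration, not specifications from the text:

```python
def runtime_hours(battery_wh, draw_watts):
    """Hours of service from a given battery capacity at a constant draw."""
    return battery_wh / draw_watts

battery_wh = 12.0 * 7.2  # 86.4 Wh: assumed 12 V / 7.2 Ah SLA battery
idle_hours = runtime_hours(battery_wh, 7.0)   # ~7 W idle draw (assumed)
talk_hours = runtime_hours(battery_wh, 20.0)  # ~20 W off-hook draw (assumed)

print(f"idle: ~{idle_hours:.0f} h, talk: ~{talk_hours:.0f} h")
```

The model is crude (real batteries deliver less capacity at higher draw), but it shows why talk time is only a fraction of idle time.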
GPON (ITU-T G.984.1 and G.984.2) increases speed to 2.5 Gbps down and 1.25 Gbps up. GPON does away with ATM, eliminating the so-called ATM cell tax. The higher speed of GPON makes it better suited to IP-based Video on Demand (VoD) than first-generation B-PON, and it has replaced B-PON for new installs. Three lambdas are used as with B-PON: data down, data up, and CATV emulation.
The IEEE Ethernet in the First Mile working group developed an Ethernet version of PON that does away with ATM and delivers Gigabit (1.25 Gbps line rate) speed. E-PON is faster than B-PON (622/155 Mbps) but not as fast as the newer ITU GPON (2.5/1.25 Gbps).
Second-generation Ethernet PON (10G-EPON) increases speed to 10/1 Gbps with an option for symmetrical 10/10 Gbps. The downside of symmetric 10G-EPON is the need for expensive optical transmitters in the customer ONT.
FTTP cable TV emulation is unidirectional, toward the customer. This creates a problem for smart set-top boxes that need to communicate with the head end. Initially set-top boxes required both coax and Ethernet connections. While that is conceptually easy, it is labor intensive to install new cable since most homes do not have an Ethernet drop near the TV.
Multimedia over Coax Alliance (MoCA) technology utilizes RG6 coaxial TV wiring for data, eliminating the need to run Category-rated twisted pair cable to each set-top box. MoCA can also be used to transport IP-based video on demand (VoD). MoCA uses frequencies above those needed by Cable TV to create a LAN, in a fashion analogous to what DSL does with voice telephony. Once MoCA is installed the same coax used to deliver conventional TV broadcast programs also provides Internet access.
FTTP represents a complete rethinking of how wired communication service is delivered. Building a FTTP network is a major construction project involving installation of fiber cabling, termination facilities and customer premise equipment.
There have been numerous horror stories about damage to other utilities and homeowner property during installation of FTTP. There have also been problems where CPE was installed in violation of National Electrical Code (NEC) requirements.
The legacy analog POTS phone network is powered by the telephone switching office. During power outages batteries and diesel generators maintain system power indefinitely. It is not feasible to deliver power over an optical network, so the customer's terminal equipment is battery backed to continue operating during a power outage. Backup time is a function of battery size: the larger the battery the longer service stays operational. Batteries are relatively short-lived components and need to be replaced every few years at the customer's expense. Even at a modest power consumption rate it takes a large battery to provide multiday power.
In the US FCC regulations require Incumbent Local Exchange Carriers (ILEC) to share certain copper unbundled network elements (UNE) with third party service providers. That regulation does not apply to fiber.
Once a locality is wired with FTTP it makes little economic sense for additional competitors to enter the market. First-mile is the most expensive and least profitable portion of the global telecommunication network. This natural monopoly, of the Internet on-ramp, creates vexing government policy questions. How does one balance the need for universal access with the massive capital outlays needed to deploy fiber?
Some municipalities frustrated by the slow roll out of high-speed service are installing their own fiber and renting it to third party service providers or delivering data, video and phone service (triple play) themselves. This is a hotly debated topic: should municipalities build their own fiber networks or is this best left to private enterprise?
Assuming this is a new install, the provider will have to run a new fiber optic drop, either aerial or underground. If underground, direct burial cable is plowed into the lawn. The main part of the ONT is typically mounted on the outside of the building and needs to be grounded to the building ground system. The power supply and battery are located inside near an unswitched outlet. If POTS phones are to remain active they are disconnected from the old copper NID and reconnected to the ONT. An Ethernet cable delivers Internet access, and a router, typically supplied by the ISP, is used to share the connection. If TV service is purchased, coaxial drops are run to each set-top box location, and in most cases MoCA is used to connect the set-top box to the service provider for billing and video on demand.
In areas not served by wired high-speed Internet, Wireless ISPs (WISPs) are rushing to fill the void. WISPs use radios that operate in both licensed and unlicensed spectrum. Point-to-point service may also be implemented using free-space optical links, which do not have to be licensed but must meet safety standards regarding eye damage from high-power lasers.
The first question that comes to mind is: what is the difference between Cellular and WISP? Cellular began life as a voice-centric mobile service and added data as an enhancement. Today the bulk of mobile cellular traffic is data due to the popularity of smart phones. Cellular service is optimized for use while in motion; WISPs are optimized for fixed locations.
WISPs use point-to-point (PtP) and point-to-multipoint (PtMP) distribution. In some cases the customer's equipment functions as a router, creating a mesh network that expands the service footprint. In a PtP network a dedicated link is created between two locations. In a PtMP network a central hub services multiple customers.
Wireless ISPs use a central radio to cover a large territory, eliminating the need to run cable all the way to the customer's location. Radio technology is ideal for rural areas where low population density makes installing copper or fiber uneconomic. The signal may take a direct path, or where obstructions exist the ISP may deploy repeaters. A repeater acts as a router, forwarding packets and extending the coverage area. Directional antennas can be used to create multiple sectors, increasing total bandwidth.
World Interoperability for Microwave Access (WiMAX) is a trade association promoting this evolving standard; it hopes to make WiMAX as successful in the metropolitan area network (MAN) space as Wi-Fi has become for local area networks (LAN). Range is about 10 miles non line of sight (NLOS) and 30 miles line of sight (LOS). Maximum data rate is about 30 Mbps. As with other wireless technologies speed and distance are inversely related: the greater the distance the lower the speed.
WiMAX is based on the IEEE 802.16 specification for wireless metropolitan area networks (WMAN). 802.16 specifies operation between 2 and 66 GHz. It is up to the WISP to choose the most appropriate frequency band and obtain any required licenses. There was early interest in using WiMAX as a cellular data standard but LTE has won that battle; WiMAX is primarily deployed for fixed wireless.
The tremendous popularity of IEEE 802.11 Wi-Fi wireless LANs created the phenomenon of Wi-Fi hot spots. All sorts of entities, from libraries to hotels to airports, have installed public Wi-Fi access. A customer connecting to the hotspot typically has to go through some type of portal experience that requires the user to agree to certain terms and conditions. Once connected the user is able to access the Internet. In some cases the service is free; in others it is pay to play.
Wi-Fi was designed as a short-range wireless LAN. Attempts to provide citywide coverage using Wi-Fi access points have not been successful. The limited range of Wi-Fi makes it unsuitable as a metropolitan area network (MAN).
White space refers to unused TV channels. The FCC is investigating allowing unoccupied TV channels to be used for low-power Internet access. This effort is controversial. A nearby white space transmitter may cause unacceptable levels of interference with consumer AV gear, affecting both Cable TV subscribers and over-the-air viewers. In addition it is very difficult for a white space radio to determine whether a particular channel is in use or vacant.
Satellites act as a very tall antenna, vastly expanding coverage area. Geosynchronous satellites occupy an extremely high orbit so they appear stationary. Low- and medium-orbit satellites are much lower and require a large number to cover the globe. The success of delivering TV programs using geosynchronous satellites prompted interest in using satellites to deliver Internet access.
Science fiction author Arthur C. Clarke is generally credited with proposing the notion of geosynchronous satellites in his 1945 paper Extra-Terrestrial Relays. This orbit is 22,236 miles above mean sea level and is now called the Clarke orbit. Clarke's idea has been a boon to radio and TV broadcasting.
Orbital period is a function of distance: the farther a satellite is from Earth the longer its orbit takes. Clarke realized that at a certain distance the orbital period would equal 24 hours. If the satellite is in equatorial orbit, a 24-hour orbit means the satellite appears to stay positioned over the same spot permanently.
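Clarke's distance can be recovered from Kepler's third law. A short Python check, using standard values for Earth's gravitational parameter and equatorial radius:

```python
import math

GM = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
T = 86164.1          # one sidereal day in seconds (~23 h 56 m)
R_EARTH = 6.3781e6   # Earth's equatorial radius, m

# Kepler's third law: T^2 = 4*pi^2 * r^3 / GM, solved for orbital radius r
r = (GM * T**2 / (4 * math.pi**2)) ** (1 / 3)

altitude_km = (r - R_EARTH) / 1000
altitude_miles = altitude_km / 1.609344

print(f"~{altitude_km:,.0f} km (~{altitude_miles:,.0f} miles) above the equator")
```

The result lands on the 22,236-mile (35,786 km) figure quoted above. (Strictly speaking the period is one sidereal day, about four minutes short of 24 hours, so the satellite tracks the rotating Earth rather than the Sun.)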
The great height of geosynchronous satellites creates a continent-sized signal footprint for each satellite. Since the satellite appears fixed in space, expensive antenna tracking mechanisms are not required.
When small-aperture Direct Broadcast Satellite (DBS) TV became popular it was natural to adapt this technology to high-speed Internet access. One-way implementations use the satellite link for high-speed download and a dialup modem for upload. Two-way service uses the satellite link in both directions.
Unfortunately the great height of geosynchronous orbit adds significant latency, making this type of service more appropriate for large file transfers than interactive browsing. One-way latency (ground - satellite - ground) is about a quarter second (250 ms). If the satellite is used in both directions latency is about 500 ms. When dialup is used for upload total latency is reduced to about 350 ms; that is still too long for effective browsing or telephone service.
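The quarter-second figure is just the speed of light over the path. A minimal sketch, ignoring processing and queuing delay and treating the path as straight up and down:

```python
C = 299_792_458              # speed of light in vacuum, m/s
GEO_ALTITUDE_M = 35_786_000  # geosynchronous altitude, m

# Ground -> satellite -> ground, one direction:
one_way_s = 2 * GEO_ALTITUDE_M / C

# Using the satellite in both directions doubles the path:
round_trip_s = 2 * one_way_s

print(f"one-way: {one_way_s*1000:.0f} ms, both directions: {round_trip_s*1000:.0f} ms")
```

Ground stations away from the sub-satellite point see a slightly longer slant path, which together with processing delay pushes the practical numbers toward the 250/500 ms quoted above.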
To reduce latency satellites must be nearer to Earth. There have been several attempts to use Low Earth Orbit (LEO) satellites to provide Internet communication service but they have not been commercially successful. Covering the globe requires a constellation of hundreds of expensive satellites. The two most famous attempts were Iridium and Teledesic.
MEO satellites occupy the distance between LEO and geosynchronous orbit. O3b Networks is building an equatorial constellation orbiting about 5,000 miles above sea level. When complete the constellation will provide coverage 45 degrees north and south of the equator.
The ISP normally supplies the radio equipment and installs it at the customer's location. The customer is then able to use a residential router to share the connection. The antenna needs an unobstructed view of the satellite. Since geosynchronous satellites are in equatorial orbit, antenna elevation decreases the farther north you are.
Unlike other wireless networks the cellular network is designed to be used while in motion. Cellular phone service is hugely popular. What started out as an expensive lunchbox-sized 2-way radio a couple of decades ago is now smaller than a pack of cigarettes and is considered an essential part of everyday life by much of the population. Some customers, especially younger ones, eschew landline phones altogether in favor of a cell phone. 90% of American adults have a cell phone and more than 60% of these are smart phones with Internet access. Worldwide there are almost 7 billion cellular subscriptions, about half of which have Internet access. Some folks have gone so far as to rely solely on their smart phone for Internet access, or tether their phone to create a home network rather than pay for wired Internet.
The attraction of wireless connectivity is not limited to voice. Almost from the beginning the cellular network was pressed into data service, typically with less than stellar results. Today smart phone usage is driving rapid conversion of the network from one optimized for voice to one optimized for data. The bulk of cellular traffic is now generated by Internet access and streaming content rather than voice. The advent of high-speed LTE is causing mobile carriers to rethink their business model. Verizon in particular is pitching LTE as an alternative to DSL in areas where it sold off its wireline business.
It is common to talk about the cellular network generationally, 1st - 5th, even though there is no hard and fast definition. 1st generation was the original analog circuit-switched network of the 1980s. 2nd generation was digital but still used circuit switching, circa the 1990s. The early 2000s saw the migration to 3rd generation digital spread spectrum optimized for Internet access at a 200 kbps data rate - the FCC definition of broadband at the time. Currently service providers are migrating to 4th generation delivering 100 Mbps speed. Quite an accomplishment for a device you can hold in your hand while traveling at speed or in the skyscraper canyons of a modern city. Things move fast in the cellular world; no sooner has 4th generation been deployed than the marketing machine has been cranked up talking about 5th generation. It is still very early in the development cycle; deployment is not expected until at least the 2020 timeframe. Hallmarks of 5G are: 1 Gbps speed, reduced latency, increased use of very high frequencies and micro cell sites.
The United States is unique compared to the rest of the world, where national cellular standards exist. The FCC chose not to mandate a particular standard. As a result we have a confusing patchwork of competing standards, but that also allows companies to rapidly bring innovative services to market. The early cellular protocol was analog: Advanced Mobile Phone System (AMPS). The modern cellular network is digital. In the US some carriers use Global System for Mobile Communications (GSM), as does most of the rest of the world, while others have adopted Code Division Multiple Access (CDMA2000). With the increased emphasis on mobile Internet that is changing, and LTE has become a worldwide standard.
The tremendous popularity of mobile Internet is driving adoption of more spectrally efficient transmission standards to increase speed and the need for more RF bandwidth to increase channel capacity. In the aftermath of the US transition from analog to digital TV in 2009 channels 52-69 were auctioned off. Much of this spectrum will be used to expand the cellular data network. The FCC is auctioning off additional TV channels and has recently made additional microwave bands available for cellular.
Data rate is governed by a complex interaction of channel size, power, interference and modulation. Due to the nature of radio communication, throughput is often significantly slower than the peak data rate. Nonetheless the modern cellular network provides meaningful high-speed access fairly reliably.
CDPD is the granddaddy of wireless data service. It used the analog Advanced Mobile Phone System (AMPS) to deliver an anemic 9.6 or 14.4 kbps. Due to the heavy compression used on the cellular network, dialup speed is significantly lower than over the landline PSTN. US carriers stopped supporting CDPD in 2005.
The modern cellular network is digital, capable of much faster data transport than the early analog system. EVDO Rev A delivers download speed in the 3.1 Mbps range; Rev B increases speed to 4.9 Mbps per channel.
HSPA+ is an evolution of HSPA that dramatically increases speed while maintaining the same radio interface. HSPA+ delivers up to 168 Mbps toward the user and 22 Mbps up. In the US AT&T and T-Mobile have adopted HSPA+.
LTE is part of the Third Generation Partnership Project (3GPP) for GSM networks. The focus is migrating cellular data to a packet-based, rather than circuit-switched, network and delivering substantially higher speed than today's cellular network. LTE delivers multi-megabit speed with very low latency. Data rate is in the range of 100-300 Mbps down and 50-75 Mbps up.
Long Term Evolution Advanced is the next-generation version of LTE that meets all the requirements set forth by 3GPP for 4th generation radio: 1 Gbps aggregate data rate down, 500 Mbps up, with improved spectral efficiency.
Mobile generations come every decade or so. 5th generation is still very much in the early research stage, focused on how the network is used and how people interact with it, along with significantly faster speed. Micro sites will deliver improved speed in dense urban areas.
Most cellular data plans have relatively low monthly data usage limits with significant overage charges. The high speed now available to cellular subscribers makes it easy to blow through the monthly cap streaming video.
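How fast a cap disappears is simple arithmetic. The sketch below assumes a 10 GB monthly cap and a 5 Mbps HD video stream; both are assumed figures for illustration, not numbers from any particular plan:

```python
CAP_GB = 10         # assumed monthly data cap
HD_STREAM_MBPS = 5  # assumed HD streaming rate

# Mbps -> GB per hour: megabits/s * 3600 s/h / 8 bits-per-byte / 1000 MB-per-GB
gb_per_hour = HD_STREAM_MBPS * 3600 / 8 / 1000

hours_to_cap = CAP_GB / gb_per_hour

print(f"{gb_per_hour:.2f} GB/hour -> cap exhausted in ~{hours_to_cap:.1f} hours")
```

At these assumed rates a single HD stream consumes the entire month's allowance in well under a day of viewing.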
An interesting wrinkle on cellular is companies like Republic Wireless, a mobile virtual network operator (MVNO). The smart phone preferentially uses Wi-Fi: whenever the phone is connected to a Wi-Fi network, voice, text and data are transported over Wi-Fi. The cellular network is only used when Wi-Fi is not available.
Tethering is an interesting way to use your cell phone to connect one or more additional devices. Depending on the phone, the LAN connection may be USB, Bluetooth or Wi-Fi. If the phone includes router capability you can connect multiple devices; if not, you will need an external router.
Being mobile, there is no installation. Increasingly, providers are moving away from subsidizing phones as part of a multiyear contract. In general FCC number portability can be used to transfer your existing wired or wireless phone number to the new carrier.