The FSB, as directed at the last meeting of the G20, is tasked with developing a global standard for the design, administration, allocation, and dissemination of a globally unique Legal Entity Identifier (LEI) for use, ultimately, in all financial transactions. Because businesses and corporations in all industries must conduct some degree of financial transactions in order to manage strategic assets or participate in currency and commodity markets, the impact and scope of a global LEI implementation transcend the financial industry, where the proposed use of a new global standard has obvious impact.
The FSB is working within a ‘time-is-of-the-essence’ window, spurred to make definitive progress on establishing a global mechanism for issuing LEIs due in part to (1) the CFTC’s stipulation that swap derivatives dealers falling under its regulatory auspices begin using an LEI this June, and (2) the G20’s instruction that the FSB report on the formation of global LEI standards at the G20’s next meeting.
A full report on the numerous issues and alternatives under discussion on both sides of the Atlantic with regard to converging on the business requirements and technical aspects of a global LEI system is beyond the scope of this update. Instead, I have prepared a more focused recommendation on a key aspect of the discussion: the use of the LEI code field and the concept of a federated versus a central approach to the establishment of LEI Registrars. That recommendation is attached as a PDF to this discussion topic.
I would urge you to review this proposal, as I believe it provides a means whereby ‘early adoption’ of an LEI process set in motion by the CFTC can be harmonized with the FSB’s task of developing a specification and design for the implementation of a global LEI. It is quite important that the confluence of these two developments produce additive benefits for the financial industry (and the global economy, for that matter), rather than a clash of efforts that hobbles the benefits of establishing a global LEI while the compliance costs of supporting and participating in such a system remain, no doubt, undiminished.
The FSB has indicated that all comments and feedback submitted as input to its deliberations must arrive by April 16 in order for it to deliver a final report by the end of the month. I have already submitted my attached recommendations to the FSB, and I look forward to comments or questions from the community on this important subject.
Apple Internet TV? Or Google?
There is much speculation about Apple’s anticipated ‘next move’, notably in the area of taking Apple TV to the next level by providing Internet TV via the iTunes framework.
However, I believe that the real opportunity in the next disruptive wave of video, movies, TV, and the Internet is more likely to rest with Google, particularly given its acquisition of Motorola and the doorway that opens to developing a truly integrated Internet/TV set-top box.
This is because, in my opinion, the real breakthrough in both the Internet and cable TV programming is going to come when the full bandwidth of the broadband coax cable is used almost entirely for the Internet Protocol (IP) stack.
Currently, the broadband capacity of coax is inefficiently allocated, compartmentalized into frequency-multiplexed video bandwidth slots dedicated to cable TV programming, with just a couple of such slots reserved for carrying IP traffic.
When the full bandwidth of coax is used to stream on-demand video via IP (with multicasting to avoid duplicate streams of the same programming), the true convergence of the Internet and TV (and search, and clickable advertising, etc.) will begin to be realized. Instead of offering the services separately, the real disruption will occur when the medium is integrated at the network, data-link, and physical layers (layers 3, 2, and 1).
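To illustrate why multicast matters for the scenario above, here is a back-of-the-envelope sketch in Python. The viewer count and per-stream bitrate are purely illustrative assumptions, not measurements of any real network:

```python
# Illustrative comparison: unicast vs. multicast delivery of one live program
# over a shared segment. All figures are hypothetical assumptions.
viewers = 10_000     # subscribers on one cable segment watching the same program
stream_mbps = 8      # assumed bitrate of a single HD video stream, in Mbit/s

# Unicast: one copy of the stream per viewer crosses the shared coax.
unicast_mbps = viewers * stream_mbps

# IP multicast: a single copy crosses the shared segment; receivers join the group.
multicast_mbps = stream_mbps

print(f"unicast:   {unicast_mbps:,} Mbit/s")    # prints 80,000 Mbit/s
print(f"multicast: {multicast_mbps:,} Mbit/s")  # prints 8 Mbit/s
```

The point of the sketch is simply that, for popular live programming, duplicate unicast streams multiply linearly with the audience, while a multicast tree sends one copy per shared link.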
As others have pointed out, the Cable TV industry is not ‘hurting’ the way the music industry was when Steve Jobs pulled 99-cent track downloads out of his rabbit hat. Google and Motorola can readily provide the technical means to integrate content providers, broadcasters, ISPs, and Cable TV infrastructure, but the business model of the existing Cable TV market providers will have to evolve and be attractive to the Cable TV franchise. The bandwidth of the Cable TV coax is so high that it will be a very long time before competitive “last mile” delivery infrastructure (i.e., fiber) can reach the critical mass to replace or threaten coax.
The Cable TV providers see that long-term trend coming, and it will provide them with the incentive to work out new relationships with the consumer to bring new life into the existing coax network.
And can you imagine how the real consumer economy, and the advertising and marketing efforts of all manner of business, will respond when it is possible to integrate clickable Internet links superimposed on, and synchronized with, video, movie, and TV programming of all kinds, including commercials, of course?
It will be absolutely HUGE !
Seeking to manage, or at least measure, financial and systemic risk by a closer inspection, analysis, and even simulation, of financial contracts and counterparties at a more precise level of detail and frequency is a significant departure from many traditional risk management and supervision practices.
More detailed contractual and counterparty analysis, aggregated to the level of the enterprise from the bottom up, offers far better insight into the risk dimensions of both a firm and the financial system than practices in the form of broad-brush composite risk measures and coefficients applied to balance sheets from the top down. The latter greatly reduces the burdens of compliance and regulatory reporting, but it is also much less informative and accurate.
The goals and objectives of accounting practices and methodologies are generally to provide an accurate view of the financial condition and activity of a firm as a going concern, to the extent that one-time events are noted as exceptions, and the effects of other transactions such as capital investment, revenue recognition, and depreciation are spread across a wider horizon of ‘useful life’.
When accounting methodologies aimed at showing a longer-term view of a firm’s ‘trailing average’, or smoothed-out, behavior are applied to risk management, artifacts can arise that obscure or mask risk; indeed, the one-time events which accounting practice seeks to footnote should in all likelihood be headline topics for risk analysis such as stress testing.
Some fundamental premises of how best to respond, as a firm and as an industry, to regulatory requests for more detailed contractual and counterparty information:
1) Individual firms should seek to turn what is traditionally viewed as the non-productive overhead cost of regulatory reporting and compliance, now expanded by the new mandate to provide more detailed financial information, into an opportunity: mapping contract-level financial positions into a financial data repository that spans the entire balance sheet (and off-balance-sheet exposures) of the firm and allows across-the-board analysis and stress testing using level-playing-field scenarios and assumptions.
Currently, most firms’ product divisions are isolated in operational silos with proprietary systems-of-record formats and potentially incompatible risk management methodologies and assumptions. Implementing this approach will yield risk measurement and management practices that are more timely, more comprehensive, and based on higher-resolution detailed data: a ‘win’ for the firm.
Firms should not view the proposal to create and populate such a database as an attempt to jack up their entire financial operations and insert a new ‘ground floor’, nor to be a replacement for internal data warehousing initiatives. Rather, the model is to allow an interface database to be populated with appropriate mappings from existing systems within the firm.
As such, this database, though comprehensive and more detailed than traditional G/L reporting data stores, is not ‘mission critical’, but more along the lines of decision support, and it could provide a platform for productive internal use within the firm to that effect as well.
2) The industry as a whole, in conjunction with regulators, should strive to agree on a common standard to represent low-level financial positions and contracts such that each firm does not create its own proprietary version of such a data model, one which then requires further re-mapping and translation (and likely incompatibility) at the level of systemic oversight.
Having the financial industry and the public regulators agree on a common data model, with requisite standardized reference data, will be a “win” for the public good as well, as it will greatly reduce the cost and complexity of making sense of more detailed financial information for purposes of analysis at the systemic level in the Office of Financial Research.
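As an illustration only of what a common contract-level representation might look like, here is a minimal sketch in Python. The field names, types, and values are hypothetical assumptions for discussion, not any proposed or existing standard:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch of a standardized contract-level position record.
# Every field here is an illustrative assumption, not a proposed standard.
@dataclass
class ContractPosition:
    reporting_entity_lei: str   # globally unique Legal Entity Identifier of the firm
    counterparty_lei: str       # LEI of the counterparty
    contract_type: str          # e.g. "interest_rate_swap", "repo", "corporate_bond"
    notional: float             # notional amount, in units of `currency`
    currency: str               # ISO 4217 currency code
    trade_date: date
    maturity_date: date

# A firm's mapping layer would populate such records from its existing
# systems of record, rather than replacing those systems.
pos = ContractPosition(
    reporting_entity_lei="<LEI-A>", counterparty_lei="<LEI-B>",
    contract_type="interest_rate_swap", notional=10_000_000.0,
    currency="USD", trade_date=date(2012, 3, 1), maturity_date=date(2017, 3, 1),
)
print(pos.contract_type, pos.notional)
```

The design point is that a shared, flat record of this general shape, populated by mappings from each firm’s internal systems, is what would let regulators and vendors analyze positions across firms without per-firm translation layers.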
Finally, by implementing a form of distributed reporting repository, if you will, each institution can make the database available on a secure basis not only to regulators but also to internal staff, as well as to vendors who can supply value-added reporting and analysis tools predicated on the standard model. This is a third win, one of economic efficiency, making better risk management practices available to a wider range of institutions that otherwise would not choose, or be able, to develop such tools.
The increased regulatory requirements mandated by the recent passage of the Dodd-Frank Act are no doubt an unwelcome development for financial institutions, and the challenges of fulfilling the specific functions of the Office of Financial Research as delineated therein would be formidable even with full cooperation from the industry. However, given that the work needs to be done, and the time and effort expended, it is clearly in the best interests of the financial industry and the public if the projects can be pursued in a manner that produces substantial long-term benefits to offset the additional costs incurred.
For all the legitimate and significant issues regarding the future evolution of the Internet which the term “Net Neutrality” is meant to embody, the phrase has become a bit of a conceptual chameleon with a life of its own, whose meaning is not particularly clear; certainly not to many people when they first encounter the notion.
Despite the fact that the events and practices which instigated the initial debate are fairly specific, the term has become the distilled cornerstone for a veritable uprising against telcos and ISPs, carrying with it a cachet which is nearly on the level of “freedom of speech” and “human rights”.
When combined with the simple lead-in of “those in support of …”, the term “net neutrality” suggests an air of impeccability, its own “self-evident truth”, a standard bearer for a cause which all intelligent and righteous beings should support without question. For that catchy spin alone, whoever coined the term deserves an advertising Clio award.
But the fuzzy semantics of the term “net neutrality” can mislead, sowing confusion or distortion in any attempt to analyze or discuss the multi-faceted and non-trivial issues and trade-offs teeming immediately beneath the self-assured surface of that simple label.
A case in point
For example, in a recent TechCrunch column, MG Siegler writes:
“For net neutrality to truly work, we need things to be black and white. Or really just white. The Internet needs to flow the same no matter what type of data, what company, or what service it involves. End of story.”
Now that is quite a remarkable statement, if you think about it. With apologies to Mr. Siegler for selecting his statement as the guinea pig in this lab experiment, let’s analyze the statement a bit more closely.
In a very short space of words, this statement contains such phrases as “truly work”, “we need things”, “black and white”, “just white”, and “flow the same”.
Ignoring for a moment the absence of a clear description of just what “net neutrality” is that we should be trying to make “truly work” (the assumption is that we all know, of course), the statement that follows which seeks to define just what is required for that to happen is remarkably vague and imprecise. In fact, it is more emotional than logical, and — in tone, at least — resembles the kind of pep talk a coach might give to his team before they take the field in a sporting contest.
In fairness, “we need things to be black and white” alludes to the fact, which no one would deny, that the rules, policies, practices and terms of service of ISPs vis-a-vis their customers should be just that — transparent, clearly documented and understood, i.e., black and white.
The transition to “or really just white” is to say that these rules and policies should not create discriminatory distinctions among customers or types of service, bifurcating them and their respective internet access into two classes: “legacy” and “future”, rich or poor, first class or coach, i.e., black or white. That is a clever segue and turn of phrase.
Caution: thin ice ahead
Where Mr. Siegler skates onto significantly thin ice is when he follows with “The Internet needs to flow the same no matter what type of data, what company, or what service it involves.”
This statement may well appear to most people to be an adequate, if not altogether necessary and sufficient, definition of “net neutrality” — but perhaps therein lies the rub with the truly fuzzy semantics of the term.
To be clear, there is no quibble with the “no matter … what company” part of the statement. The most compelling criticisms leveled at ISPs have to do with their documented use of ‘discretionary policies’ to ‘manage’ Internet traffic, and the natural concerns over potential abuse by ISPs of the inherent power and leverage that comes with being the gatekeepers of the Internet.
Discriminatory practices by ISPs, including preferential treatment of customers or partners, whether for commercial or even political purposes, would clearly be improper, and have a negative effect on the innovation and economic growth which the Internet fosters and enables, particularly when the ISPs involved are “size huge” and affect millions and millions of customers.
Go with the flow … but fasten your seatbelts
Rather, it is in the declaration that the “Internet needs to flow the same no matter what type of data … or what service it involves” where major turbulence in the “net neutrality” debate is encountered, requiring that we fasten our seatbelts for the duration of the flight, as it were.
It is at this point in our journey down the rabbit hole where the matter of ‘net neutrality’ becomes more turbid, yet multi-faceted, and without clear solutions: not ‘black and white’ ones, and certainly not ‘just white’ ones.
The phrase “flow the same” is where the devil indeed lurks in the details, cloaked in that larger statement which, for all intents and purposes, could well have been crafted by tireless opponents of discrimination in the U.S. Civil Rights movement.
What exactly does “flow the same” mean, and what are the implications of any of its possible interpretations ?
Before proceeding, you may be asking “why so picky?” Rest assured, I am not defending, nor am I a shill for, telcos and ISPs in this debate! And I am just as concerned as the next person about establishing any precedents or slippery slopes that would lead to the bifurcation and balkanization of the Internet.
But to demand that the “Internet flow the same” regardless of data or service is not only completely ambiguous, it is also a notion with inherent contradictions along multiple dimensions.
In an effort to explore the problems that the fuzzy semantics around this particular notion entail, let’s do some “thought experiments”. Lab coats and protective eye goggles ready ?
In this “thought experiment”, let’s take two scenarios, or snapshots, if you will, of two alternative Internet universes.
Alternate Internet universe 1.A
Alternate Internet universe 1.A (AIU 1.A) consists of one thousand servers and one billion clients. The principal service in this alternate universe consists of sending and receiving short messages, called “Burps”.
- Clients poll all 1000 servers for Burps addressed to them, evenly spread out over the course of repeated 60-second intervals.
- When a client contacts a server in the course of this polling, it can retrieve all Burps in its Burp inbox at that server, respond to or acknowledge Burps retrieved in previous polls, and also post, or deposit, one new Burp of its choosing at each server.
- Finally, Burps are limited to 100 bytes.
Let’s analyze the total “Internet flow”, in terms of both bytes and bandwidth, of this alternate Internet universe across the entire network, shall we ?
The maximum utilization of Internet 1.A occurs when every client posts one new Burp at each of the 1000 servers every 60 seconds, and responds (with a reply Burp) to all Burps retrieved from any and every server at the time of the poll of each server 60 seconds previously.
Remember, the rules of this universe specify that clients can only deposit one Burp per server when they check that server for queued Burps every 60 seconds.
Even though some clients may retrieve many Burps from their inBurp queue, other clients may have none to retrieve.
The steady-state bandwidth per second of universe 1.A in maximum utilization (calculating the total amount of information processed by the network in a 60-second cycle with all clients performing to maximum capacity, and then dividing by 60) in bytes/sec is therefore
[ #clients X #servers X bytes/burp X 3 ] / 60
The factor ‘3’ above arises because the number of burps per client per server per cycle can be analyzed thusly:
- each client can retrieve a queued burp (1), respond to it with a reply burp (2), and post a new burp (3); and
- because every client is constrained to posting only one burp per server per polling cycle, the statistical aggregate number of burps which flow through the Internet of alternate universe 1.A is steady-state bounded and enumerable, regardless of the distribution of burps among the queues of different clients.
Plugging in the numbers in the expression above, we get:
[ 1 billion X 1 thousand X one hundred X 3 ] / 60, or
5 × 10^12 bytes/sec = 40 × 10^12 bits/sec
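The arithmetic can be checked with a few lines of Python, using only the figures stated above:

```python
# Back-of-the-envelope check of the AIU 1.A steady-state bandwidth.
clients = 1_000_000_000             # one billion clients
servers = 1_000                     # one thousand servers
bytes_per_burp = 100                # Burps are limited to 100 bytes
burps_per_client_per_server = 3     # retrieve one, respond to one, post one new
cycle_seconds = 60                  # polling cycle length

total_bytes_per_cycle = clients * servers * bytes_per_burp * burps_per_client_per_server
bytes_per_sec = total_bytes_per_cycle / cycle_seconds
bits_per_sec = bytes_per_sec * 8

print(f"{bytes_per_sec:.0e} bytes/sec")  # prints 5e+12 bytes/sec
print(f"{bits_per_sec:.0e} bits/sec")    # prints 4e+13 bits/sec, i.e. 40 x 10^12
```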
Alternate Internet universe 1.B
Alternate Internet universe 1.B has the same number of clients and servers as AIU 1.A. However, the type of data and service provided in AIU 1.B is different than that of AIU 1.A. Alternate Internet universe 1.B provides realtime Channel-Multiplexed Multimedia Stream (CMMS) sharing among all clients.
- Each channel-multiplexed multimedia stream consists of any number of different individual media streams on different virtual channels of the single multiplexed stream, as long as the total allowed bandwidth of a channel-multiplexed media stream is not exceeded.
- The individual media channel streams are multiplexed together into a single CMMS solely in order to simplify the transport-level management and routing of media channel stream sharing between client and server nodes across the network.
- Each client can stream exactly one CMMS to each server, and can choose to subscribe to up to 1000 multimedia channel streams across the population of servers.
- Clients can choose to subscribe to more than one CMMS from a single server.
- Furthermore, the 1000 CMMS which each client streams (one per server) can all be unique.
- The maximum bandwidth of a single CMMS, fully compressed after being multiplexed and ready for transmission, is 1 Gigabit/sec
What is the information bandwidth of AIU 1.B at maximum utilization?
This is fairly easy to calculate, because, when each client node operates at capacity, 1000 CMMS streams are originated, and another 1000 are subscribed to and received. If we furthermore assume that every CMMS is unique and remove systemic opportunities for multicasting or redundancy optimization, we arrive at
#clients X CMMS-maxbandwidth X ( #maxstreams-out + #maxstreams-in )
1 billion × 1 Gigabit/sec × 2000 = 2 × 10^21 bits/sec
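As before, the arithmetic can be checked with a short Python sketch, using only the figures stated in the bullet list above, along with the ratio to AIU 1.A:

```python
# AIU 1.B maximum-utilization bandwidth, per the stated assumptions.
clients = 1_000_000_000            # one billion clients
cmms_bits_per_sec = 1_000_000_000  # 1 Gigabit/sec per CMMS
streams_out = 1_000                # one CMMS streamed to each of 1000 servers
streams_in = 1_000                 # up to 1000 subscribed streams per client

aiu_1b_bits_per_sec = clients * cmms_bits_per_sec * (streams_out + streams_in)
print(f"{aiu_1b_bits_per_sec:.0e} bits/sec")  # prints 2e+21 bits/sec

# Ratio to AIU 1.A's 40 x 10^12 bits/sec total system bandwidth:
aiu_1a_bits_per_sec = 40e12
print(f"{aiu_1b_bits_per_sec / aiu_1a_bits_per_sec:.0e}")  # prints 5e+07
```

So AIU 1.B carries 2 × 10^21 bits/sec, about 5 × 10^7 (fifty million) times the total system bandwidth of AIU 1.A.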
Kind of Blue (i.e., “So What”)
Each of these alternate Internet universes have the exact same configuration of clients and servers — and one could have been instantaneously cloned from the other (theoretically speaking) by some sleep-deprived string-theorist-in-the-sky who fell asleep and whose head lurched forward and accidentally hit the ‘Big Universe Morph Process (BUMP)’ that-was-easy button in the middle of a DNS root node upgrade …
If we compare the total system bandwidth of Alternate Internet universe 1.A with that of Alternate Internet universe 1.B, the system bandwidth of AIU 1.B is 5 × 10^7 times greater than that of AIU 1.A.
That is not merely somewhat more data in the network (a pittance). No, it is nearly 8 orders of magnitude (powers of 10) difference in BANDWIDTH. AIU 1.B has 50 million times the bandwidth of AIU 1.A; put another way, it would take 50 million entire AIU 1.A networks to equal a single AIU 1.B network.
Miles and Miles to go before we sleep
The two Alternate Internets have, yes, different data and services. Yet according to the definition of the fundamental principles of ‘Net Neutrality’ so succinctly stated above, “The Internet needs to flow the same no matter what type of data …or what service it involves.”
Alternate Internet universe 1.B would of course have no trouble ‘flowing the data’ of AIU 1.A. But obviously, if the tables were turned that would not be the case at all.
At this point in the infancy of the Internet, we are certainly farther along than 1.A, but nowhere near 1.B — perhaps more like 1.A.2.0. And we are already mixing in a bit of 1.B ‘beta’ into the 1.A.2.0 ‘release’.
In fact, the T3 backbones look a lot like bundles of the ‘CMMS’ multiplexed media channel streams conjured up above. And we have billions of Internet clients ‘burping’ in various flavors of social networks at the same time that businesses are offloading proprietary IT shops to “clouds” and consumer e-commerce is looking for that “next big thang” ahead of Web 3.0.
Meanwhile, the wireless data revolution is, well, kind of bringing the wireless network to its knees just as the first generation of hand-held “pre-CMMS” streamers are being introduced and fanning the flames and whetting consumer appetites and expectations with dreams of sugar plums and voice-controlled robot candy canes.
Here comes the Q.E.D.
It is perhaps the not-exactly-the-same zone of mobile wireless Internet that provides the most incontrovertible evidence that the fuzzy-semantic “same flow no matter what” notion of ‘Net Neutrality’ is inappropriate, and an overly simplistic star to hitch your Internet wagon to. As a ‘thought experiment’ exercise at the end of the chapter, consider analyzing an Internet Universe in which the terrestrial zone grows exponentially in bandwidth thanks to the essentially unlimited ability simply to lay more and more pipe. Terrestrial pipe has the following magical property: it is truly scalable and bandwidth-accretive, because adding another pipe does not interfere with or detract from the pipes already in place.
This just in, in case you did not get the memo: the wireless spectrum, regardless of how cleverly it is compressed, has theoretical bandwidth limits. Why? Because, by its very nature, it is just one big “pipe in the sky” which is shared, and ultimately contested, by everyone in the geographic area within the cone or zone of the immediate local wireless cell.
When the analog voice cell network upgraded itself from 3-watt phones to 1-watt phones, way back in the oh-so-nineties, the wireless network transitioned from larger area cells to much smaller cells. The main objective was simply to be able to increase the density of mobile subscribers per area and thereby allow a large increase in the total number of wireless subscribers system-wide. The auction and re-purposing of electromagnetic spectrum by the FCC, making more spectrum available to the wireless industry, is a welcome, but ultimately finite, addition of bandwidth and headroom to the wireless space. And, as we all know, for every headroom, there is a max.
All you need to do is extend out the timeline sufficiently, and you will (likely sooner than we think) reach the point where the growth in wireless bandwidth is again constrained, and we ultimately run out of road.
The only strategy which will mitigate the trend to saturation of bandwidth within a given size cell and a given fixed spectrum is the same one described above regarding the first mobile analog voice cell network. Namely: reducing the effective size of the cell so that the exponentially increasing bandwidth of the terrestrial Internet can, by extending its tentacles and hubs into a larger number of lower-energy spaces, make more bandwidth available to a smaller number of users but in a much larger number of micro-cells.
This strategy, or one very similar to it, will be required to continue expanding and improving the performance, reliability, availability, and bandwidth of the wireless domain for a simultaneously growing population of consumers, types of data, and services.
But to think, maintain, or imagine that the amalgamation of terrestrial and wireless interconnections to the Internet will “flow the same”, no matter what type of data or what service, is merely a recipe for ongoing angst, disappointment, and internet class warfare, in my opinion.
Chocolate or Vanilla ? or … Coppa Mista ?
Yes, we must collectively find our way, with consumers holding both content providers and ISPs to account by voting with their feet in a robust and competitive market economy.
It is the factors which impede and interfere with the power of consumers, individuals and businesses alike, to collectively and competitively shape the outcome of the next wave of the Internet’s evolution that are the real issues and dangers threatening the prospects for continued survival and prosperity of an innovative and open Internet.
It is not quite as ‘black and white’ (or chocolate and vanilla) as saying that the ISPs (and their purported Vichy Regime collaborators) are wearing the black hats while the rest, actively championing ‘Net Neutrality’ (whatever that really means ultimately), are wearing the white hats.
The fuzzy semantics of just what “Net Neutrality” is thought to be are not necessarily helping, either, despite the critical importance of the issue, the debate, and what happens next.
Did Google turn its back on tech and run up the sidelines with Verizon just to plant its flag in future territory ?
Was the proposal aimed at pre-empting the FCC, or just gaming them to forestall regulation ?
Is a compromise between corporations arguing over the spoils of the internet possible ? Or have they jammed the gears of the free market ?
Will ISPs be stretched, squeezed and deformed by consumers and producers, or merely flattened by the FCC ?
Just how much tea should the tillerman get to ferry our internet traffic across the river?
Setting the stage: the Google-Verizon ‘Net-Fest’
On Monday, August 9, 2010, Google and Verizon released a joint Legislative Framework Proposal, offering a “proposed open Internet framework for the consideration of policymakers and the public”. The following day the two respective CEOs, Eric Schmidt and Ivan Seidenberg, submitted an op-ed piece to the Washington Post to add some narrative background to their initiative, which began:
We have spent much of the past year trying to resolve our differences over the thorny issue of “network neutrality.” This hasn’t been an easy process, and Google and Verizon are neither regulators nor legislators. But as leaders in our respective fields, we have searched for workable public policies that serve consumer interests and create a climate for investment and innovation. What has kept us at the table and moving toward compromise was our mutual interest in a robust Internet and our recognition that progress would occur only when players from across the Internet space work together.
This naturally has generated quite a bit of reaction from multiple points on the compass. The Electronic Frontier Foundation weighed in thusly.
Update: Richard Whitt, Google’s Telecom and Media Counsel in Washington, D.C. has posted some “Facts about our network neutrality policy proposal” on Google’s Public Policy Blog, prompting this response from Karl Bode at DSLreports.
As popular as it appears to be to bash either Google, Verizon — or both — regarding their joint initiative, the fundamental issues which make the net neutrality debate what it is are not going to go away ultimately without some form of negotiation or cooperation among the major players and their respective back benches — which includes content providers, tech companies, ISPs, the FCC, and the public.
For consumers, watching the positioning and movement of the still-fluid ‘tech-tonic’ plates that impinge on this debate can be a lot like witnessing a battle between Godzilla and Mothra without knowing who the good and bad monsters are, or whether there is even that distinction among monsters, all the while becoming anxious about whether the Public Defense Forces will be able to tame the beasts with small-arms fire or be forced to blow the entire city out of the water by desperately resorting to far more destructive ordnance. In such ultimate fight-club-type struggles it is hard to see the way clear to optimal outcomes for peace, prosperity, and the public good.
Although it is easy to see some (many?) of the unstated motivations and agendas of Google and Verizon behind their arm-in-arm, why-can’t-we-all-just-get-along locker-room embrace, it is also true that some degree of industry and private-sector cooperation on the question is needed in order to sort out the legitimate concerns on all sides. Hopefully, constructive efforts will converge on a workable solution without requiring that the government play the role of Solomon, threatening to cut the baby in half as well as throw out the bath water.
“Workable solution for whom?” one is quite right to ask, of course. Is it for the corporations sitting across the table with the revenue / expense pie of the huge internet market between them — which they seek to carve up — or for the consumer who provides the demand and ultimately pays for the continued growth and prosperity of a thriving pie-making economy? (memo to self: in my half-baked metaphor, is the ISP the crust and the content provider the filling, or … oh, nevermind )
Both Google and Verizon have something to gain by scratching each other’s backs. Google is much, much more than just a ‘content provider’, of course, and much more than just a ‘tech company’, as it clearly has designs on the communications-infrastructure side of the coin that flips traffic into its waiting arms (jaws?). Verizon likewise seeks to go beyond its role as a sprawling network of pipes and antennas and grow to manage and sell content, although the bloom has come off that rose as competition among the network players has led it to table, for the time being, that expansive wish-list agenda.
Furthermore, Google and Verizon are just place holders for similar industry players and roles. Apple and AT&T fit the M.O. as well, and are no doubt rubbing their chins as a result of the joint announcement, but they will likely watch to see where the dust settles before venturing further into the mine field.
It would also seem that the ‘proposal’ by Google and Verizon was a foray driven somewhat by game theory, to wit: best to keep the private sector options open to work out mutually optimal solutions to the issue and keep the FCC and the government from having to make a definitive, across-the-board ruling in the absence of visible progress on the matter by the private sector — a ruling which is more likely to be bad for one side or the other, just not clear which.
Fundamentals of the Internet-enabled Economy
Let’s not lose sight of the fundamentals of the internet-enabled economy as we try to decide who to root for (or boo) in the debate, whether the opposing teams or the referees.
1. The Internet is no longer a DARPA science experiment, but has become THE information utility and enabling technology of the modern era;
2. The skyrocketing growth and benefits afforded by this now-ubiquitous infrastructure and services platform which spans the globe are due largely to three primary dimensions of the Internet phenomenon:
(a) The explosion, in general, of applications, services, content, and online information which the platform enables, and which is also fueling and increasing consumer demand and expectations in a chicken-and-the-egg fashion;
(b) The efforts on the part of the infrastructure developers (i.e., ISPs and Telcos) to keep pace with this ongoing and increasing demand and, at the same time, competitively maintain and grow their respective market share;
(c) Rapid technological innovation in the functionality, capabilities, and performance on the part of the so-called ‘content providers’ and tech companies in the supply chain, allowing them to continue to deliver services, web resources, information access, and user experiences to consumers and businesses at the leafy ends of the Internet tree of life.
This business and technology nexus of innovative supply-side vendors and corporations responding to consumer and business demand for any manner and variety of internet-enabled solutions and services is populated by an ocean full of creatures large and small. There are a relatively small number of extremely large ones like Google, and an extremely large number of relatively small ones — the agile and fast moving entrepreneurial startups seeking to carve out their niches and be the next big break-out internet success story.
The smaller Internet businesses and technology vendors have concerns about how their fates might be affected if (and how) a giant like Google were to willfully or inadvertently roll over on them. This apprehension, or at least the uncertainty, is one basis for many of the criticisms of Google’s ‘participation’ in the joint proposal with Verizon voiced by some in the tech sector.
3. Quality of service, and distinctions between different classes of communication modes, were, along with the connect-to-anything-anywhere design cornerstone of the Internet Protocol (IP), factors from the outset in the design of the Internet. The higher-level protocols on top of IP, such as TCP, UDP, and RTP, as well as the many flavors of physical-network and data-link protocols and network architectures, were developed by computer scientists and communications experts who spent considerable effort solving the technical and usability challenges in robust and elegant ways.
For example, in the (now disbanded) 802.6 Metropolitan Area Network spec (essentially a token-passing, wide-area bus architecture which time-division multiplexes access to the medium under a deterministic access protocol), access to the network for higher-priority traffic with realtime performance requirements (e.g. interactive telephony) was ensured by allocating a portion of the network bandwidth (in the form of designated, reserved time slots) to services with minimum quality constraints. This quality-of-service provisioning meant that traffic of this type would not be denied service, or its minimum required bandwidth, as a result of the network being swamped by less time-critical, longer-running data transfers and queues.
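The reserved-slot idea can be illustrated with a toy scheduler: a fixed share of each frame's time slots is set aside for priority traffic, so realtime packets are never locked out by a bulk backlog. This is a minimal sketch of the general principle, not the actual 802.6 mechanism; all names and numbers are illustrative.

```python
from collections import deque

# Toy model of slot reservation: each frame has 10 time slots,
# 3 of which are reserved for high-priority (e.g. voice) traffic.
# Hypothetical numbers, not taken from the 802.6 spec.
SLOTS_PER_FRAME = 10
RESERVED_SLOTS = 3

def schedule_frame(priority_q, bulk_q):
    """Return the packets transmitted in one frame."""
    sent = []
    # Reserved slots: high-priority traffic goes first, up to its quota.
    for _ in range(RESERVED_SLOTS):
        if priority_q:
            sent.append(priority_q.popleft())
    # Remaining slots: best-effort bulk traffic, then any leftover priority.
    while len(sent) < SLOTS_PER_FRAME and (bulk_q or priority_q):
        q = bulk_q if bulk_q else priority_q
        sent.append(q.popleft())
    return sent

# Even with a huge bulk backlog, the voice packets claim their slots.
voice = deque(f"v{i}" for i in range(3))
bulk = deque(f"b{i}" for i in range(1000))
frame = schedule_frame(voice, bulk)
print(frame[:3])  # ['v0', 'v1', 'v2']
```

The essential property is that the reservation is a floor, not a partition: when no priority traffic is waiting, the reserved capacity is simply returned to the best-effort pool.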
802.6 has since been deprecated and is no longer deployed, owing to the advances in bandwidth and performance which IP networks can now provide via standard [802.3 (Ethernet), 802.11 (wireless), 802.14 (cable modem)] data-link protocols. A host of new link-level protocols have been specified for new uses, mainly for wireless spectrum in one form or another (see the many varieties of IEEE 802 link-level protocols at http://en.wikipedia.org/wiki/IEEE_802). IEEE 802.23, for example, is a new working group devoted to Emergency Services, clearly an area where quality-of-service and performance guarantees and priorities are extremely important.
Furthermore, even though the Internet protocol (IP) can be made to (logically) work seamlessly across interfaces to new data link technologies and architectures, it is certainly not the case that any given end-user should expect or demand equivalent performance and services across all data links and end-user interface equipment.
It is also reasonable to expect that a service such as emergency communications, if routed for some portion of its connectivity over IP networks provided by ISPs, would receive a different quality of service than, say, a long-running background download of an e-mail archive.
Consumer demand and vendor supply: a growing gap
We should be very careful, in this whole ‘net neutrality’ debate, not to assume, a priori and out of the starting gate, that ALL uses of the internet communication platform are, or should be, completely and anonymously equal in priority, access, and performance, nor that all services and transports must necessarily cost the same regardless of mode of use.
Clearly, there are some distinctions that can and should be made regarding both quality of service and the associated cost of contracted and/or assured types and priorities of access to communications services across the broad gamut of public, private, emergency, commercial, consumer, recreational, and even national security uses.
4. Consumers and content providers are collectively placing increasing demands on both ends of the collective pipes of the internet infrastructure. That is a simple and obvious fact of life. The providers of that connectivity (the cable companies, telcos, and wireless providers) are positioned to profit by providing and charging for it. Where is the fair balance, and how do we achieve it? How much tea should the tillerman get to ferry us across the river, in other words?
So far, the commercial interests which use the Internet infrastructure to sell and deliver their services, and the consumers who avail themselves of them, have relied on so-called free-market competition to manage the cost/benefit tradeoffs and pricing of internet connectivity and level of service, ranging from dialup to T3.
For the consumer, at the leafy, fruit-plucking end of the sprawling, gateway-connected amalgamation of ISPs who shuttle traffic around the world, the ‘competitive’ choices usually boil down to choosing between cable, DSL, and, lately, wireless, and, within that, a subsequent choice of price for a certain level of ‘bandwidth’.
Unfortunately, both the competitive choices and the price-for-bandwidth options currently have undesirable artifacts and serious shortcomings for the consumer, a situation which has produced, and continues to deepen, discontent and dissatisfaction among consumers.
For starters, there is usually only one cable company in any one market, since only one company owns the cable, and multiple ISP ‘competitive access’ to the cable is, however desirable, slow in coming and certainly resisted by the cable industry.
Also, even though telcos such as Verizon with FiOS and AT&T with U-verse are trying to leap out of the bandwidth constraints of DSL and compete more effectively with cable in urban areas, the capital investment required to provide such last-mile, truly broadband, point-to-point internet connectivity to the home (and thereby compete with cable) is significant, and even more so in rural areas.
Furthermore, regardless of the choice of ISP, price-point choices for levels of internet service predicated on levels of bandwidth are often moot.
In the case of cable-company ISPs, it does not much matter if you are signed up for (and theoretically promised) 5 Mbit/s if enough people in your neighborhood are saturating the local segment's bandwidth (it *is*, after all, a shared-wire, collision-avoidance, Ethernet-style wire protocol, albeit multiplexed into video bands of the cable).
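The shared-segment effect is easy to see with back-of-the-envelope arithmetic. The figures below are purely illustrative (segment capacity and tier pricing vary widely by plant and operator); the point is only that the advertised rate is an upper bound, not a guarantee, once enough neighbors are active.

```python
# Hypothetical shared cable segment: one downstream channel's usable
# capacity, divided fairly among whoever is active at the moment.
SEGMENT_CAPACITY_MBPS = 38.0   # illustrative channel capacity
ADVERTISED_MBPS = 5.0          # what each subscriber is "promised"

def effective_rate(active_users):
    """Fair-share throughput per active user on a saturated segment."""
    fair_share = SEGMENT_CAPACITY_MBPS / active_users
    # You never get more than your tier, and never more than your share.
    return min(ADVERTISED_MBPS, fair_share)

print(effective_rate(4))   # 5.0  -> light load: full advertised rate
print(effective_rate(40))  # 0.95 -> saturated: far below the promise
```

With a handful of active neighbors the tier cap binds; with forty, the shared medium does, and the ‘promised’ number becomes fiction.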
Telcos can deliver more deterministic levels of bandwidth between the central office and the consumer, but their equipment will ultimately face traffic congestion as well, particularly in the switches and gateways which must aggregate and route traffic and then interface with the fatter pipes that flow into their wide-area networks.
As for wireless, anyone who has used a mobile phone knows the frustration and misery of poor connectivity and dropped voice calls, regardless of whether the color of your ‘map’ is blue, red, or any other color. The upsurge in smartphones has added further burdens and bottlenecks to wireless networks, spurring changes in billing plans as well as customer complaints. The prospect of being asked to pay more simply for reliable service is another source of fuel for customer concerns about the plans and intentions of ISPs and wireless telcos.
Be careful what you wish for
5. I completely concur with those who would not allow ISPs to impose arbitrary, and certainly undocumented, decisions about which types of internet traffic to block, restrict, or selectively throttle in order to manage total demand and maximize the number of satisfied customers (and hence revenue, their objective function).
However, a very unsatisfactory and unsavory outcome will ensue if we do not recognize that there is a legitimate need on the part of the ISP to be able to provide a consistent level and quality of service as part of a consumer’s contract with an ISP — and allow ISPs the means to provide that quality of service for a price that supports the necessary capital investment required to keep up with the market demand for a given level of supposedly ‘guaranteed’ bandwidth.
Failing a resolution to the ‘net neutrality’ battle that is fair to ISPs as well, we are likely to reap an undesirable and regressive outcome. We will drive ISPs’ business models into territory that we will ultimately come to hate (and which is already rearing its ugly head): pricing by data volume in lieu of pricing by bandwidth.
If ISPs are forced, one way or another, into being regulated like a public utility such as water or electricity, then we will get similar results: pricing based upon how much informational commodity you send or receive over a slow-growing and under-capitalized set of infrastructure, just like water or electricity.
ISPs and telcos will be able to continue growing revenue with a minimum of capital investment in improving the capacity and system bandwidth of their local and backbone infrastructure; revenues will simply ride up with the growing traffic, at least in the short term. The longer-term effects of this business model would be disastrous, acting as a significant drag on the continued growth and capabilities of the Internet, for all parties involved and for the economy as a whole.
Wireless telcos have long had this model for data, and AT&T, reacting to mushrooming demand on its network from iPhones, has rescinded the ‘unlimited’ data plan for new iPhone accounts. Time Warner and other ISPs are already conducting ‘trials’ of measured-data pricing models for residential accounts. This practice is full of pitfalls for content providers and consumers alike, who are of course the camps fervently holding up the banner of zero-tolerance ‘net neutrality’.
What is needed is an approach that allows a degree of cooperation among the major players on both sides, along with recognition of the realities of what is required to deliver different levels of service across the Internet food chain. Absent workable solutions, the consumer will eventually be worse off than before, for any number of reasons.
Yes, the consumer must have a voice that carries some weight, and, short of voting with our feet in the currently semi-flawed competitive marketplace, the FCC is the likely, if not exactly the first, choice to help provide that voice and keep the players from playing dirty. And yet falling back on the government to somehow omnisciently and optimally pull a live rabbit from the bureaucratic black hat is itself fraught with unintended consequences and precedent.
But keeping the explosive potential and innovative offerings enabled by the Internet in some degree of elastic sync with a connective infrastructure capable of supporting that growth is a non-trivial problem, and one that will not be solved by an all-or-nothing campaign which seeks to cast any one stakeholder as the evil party.
Unless we are level-headed, the ultimate ‘net neutrality’ could well be a very level, but very unsatisfactory, playing field: one where consumers and businesses alike are billed with a simple metric that resembles nothing more than number of bytes transferred × effective speed of transfer. Something like that is sure to give businesses and consumers alike, and the entire internet economy, a migraine headache.
The Internet is not a data storage resource, to be billed based on how many bits or bytes it is asked to ‘save’. Increasingly, the Internet is a temporal experience, whose relative value and reasonable cost to the consumer are driven by any number of factors other than metered bytes.
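The difference between the two billing regimes is easy to make concrete. In the sketch below, every price is hypothetical; the point is only that metering by volume turns ordinary, legitimate heavy use (a household that streams video) into an open-ended bill, whereas a flat rate pegged to an assured bandwidth tier is a fixed, budgetable cost.

```python
# Illustrative comparison (all prices hypothetical): a flat rate pegged
# to an assured-bandwidth tier versus metering by bytes transferred.
FLAT_MONTHLY = 50.0   # fixed price for an assured-bandwidth tier
PER_GB = 1.50         # hypothetical metered price per gigabyte

def metered_bill(gb_transferred):
    """Monthly charge under volume-based pricing."""
    return PER_GB * gb_transferred

# A household streaming roughly 2 GB of video per day:
monthly_gb = 2 * 30
print(metered_bill(monthly_gb))  # 90.0 under metering
print(FLAT_MONTHLY)              # 50.0 under the flat tier
```

Under metering, the bill scales with usage even when the user's actual experience (and the ISP's delivered bandwidth) is unchanged, which is exactly the incentive mismatch the paragraph above describes.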
It would certainly seem that some accommodation for different classes and qualities of service would be a good thing: one where the cost of a given level of performance for broad classes of service is a fixed, budgetable cost, pegged to an assurance of continuous bandwidth appropriate to the type, class, and desired performance of the service, whether that service is home wireline streaming video-on-demand, background large-file transfer, e-mail, interactive web collaboration, or wireless multimedia access.