Date: Mon, 6 Jan 1997 20:03:17 -0800 (PST)
From: Phil Agre
To: rre@weber.ucsd.edu
Subject: Interview of Vint Cerf
Reply-To: rre-maintainers@weber.ucsd.edu

[Interview of Vint Cerf by Dan Tebbutt.]

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
This message was forwarded through the Red Rock Eater News Service (RRE). Send any replies to the original author, listed in the From: field below. You are welcome to send the message along to others but please do not use the "redirect" command. For information on RRE, including instructions for (un)subscribing, send an empty message to rre-help@weber.ucsd.edu
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

[http://www.apcmag.com/profiles/213a_1ce.htm]
[Copyright 1996 Dan Tebbutt/Australian Consolidated Press.]

FATHER OF THE INTERNET
by Dan Tebbutt

01/01/97 -- Vint Cerf is not exactly a household name -- but he should be. Widely acknowledged as the father of the Internet, it was Cerf, with research partner Bob Kahn, who in 1974 invented the TCP/IP protocol suite that drives the global Internet. Cerf has dedicated a lifetime to research and development that has extended mathematics and computer science, both in academic and commercial applications.

* Introduction
* Internet II: Putting Power Back into Academia's Tower
* Making Real-Time Services a Reality
* Is the Internet Infrastructure Adequate?
* Is Videoconferencing Viable on Today's Net?
* Doomsayers Predicting the Collapse of the Net?
* Geodiversity: a Global Balance for the Internet
* An Asia Pacific Backbone
* Effects of Regulatory Liberation
* ATM and the Internet: Marriage or Mirage?
* Gigabit Ethernet, ADSL and Cable Modems
* Them and Us: A Lack of Symmetry
* Internet Telephony and Economics of Voice over IP
* Going the Distance: New Charging Models for Hybrid Services
* Regulating the Industry: the Proper Role of Government
* How Can Regulatory Bodies Possibly Cope?
* Security: the Bugbear of Internet Commerce
* US Encryption Policy
* Security: Psychological or Technical Issue?
* Why Java's Not the Most Exciting Technology on Net
* Network Reliability: It's Still Not the Phone
* IPv6: Now or Never
* Copyright and the Distribution Medium
* Censorship at the Periphery
* Embedding Technology in our Lives
* The Internet's MTV Generation
* Real Cyberspace in our Lifetime
* Divorced from Responsibility
* Next Phases for the Internet
* A Quantum Leap for Search Tools?
* Networks and People with Disabilities
* Father of the Internet

Introduction

After a decade cultivating packet networks for ARPANet, the precursor to the Internet, Cerf joined US telco MCI Communications in 1982, where he designed the MCIMail global email infrastructure. Cerf spent the latter 1980s reunited with Kahn at the Corporation for National Research Initiatives, leading the Internet Architecture Board and founding the Internet Society in 1991. Cerf served as ISOC president until 1995, meanwhile rejoining MCI in 1994 to oversee the company's burgeoning Internet business. His current position is MCI's senior vice-president of data architecture.

"Engineers, most of the time, aren't poets," Cerf once wrote. As if to prove what an extraordinary engineer he is, Cerf shows a dab hand at poetry, whether parodying Shakespeare or explaining Net operations. He writes a popular Web column with considerable literary flair and keen ideas. His recent 'Future Imperfect' combines visionary ideas and accessible style in a fashion reminiscent of the great Arthur C Clarke.
Against such imposing credentials, APC was delighted to find an articulate and knowledgeable gentleman uniquely qualified to express fascinating views on the future of the Internet.

Internet II: Putting Power Back into Academia's Tower

APC: What are your thoughts on the Internet II initiative: where do you see that coming from and what are your expectations for advanced development that may flow from it?

Vint Cerf: Actually there are three different initiatives that I understand are going on and they've got slightly varying aims: everybody is confused -- including me. My understanding of Internet II is that this is sort of a continuance of the High Performance Computing and Communications (HPCC) initiative which has been going on since the early 1990s. It involves several agencies of the US government that have historically been involved in supporting computer communication research, including ARPA, the National Science Foundation, the Department of Energy and NASA. (I know there are other agencies involved in HPCC so I am sure that they have some role in Internet II.)

I take this initiative to be a continuing investment by the government in high-end, high-functionality and high-performance research. It's often the case that industry isn't ready to make significant investments in some of these advanced technologies because they're still so risky that they can't quite see a product or service returned on them in a reasonable amount of time. And yet everyone knows that you need to do something, to spend some money, in order to get new capabilities on the table. So I see this as another one of the steps the government can take at the front-end, having demonstrated functional capabilities [of] some of these new very high-speed and often special-function ideas, things like real-time services and the like.
Once the research community has been able to try things out and perhaps even come to some tentative agreement on standards, the product manufacturers can go to town and the service providers like MCI can begin to buy those products and install them. So I'm looking forward to Internet II producing terabit-speed networks, for example, and hardware switching -- maybe even optical switching -- as a technology that we can ultimately apply. I think of it as largely a continuance of the High Performance Computing and Communications effort.

APC: I'm interested in the academic side of it, the universities banding together again to restore the research and educational function of the Internet which has been lost with the commercialisation of the Net.

VC: It's very interesting. There are two things going on here and I'm not sure to what extent Internet II and this initiative by the universities quite match up. The universities and other research labs almost certainly will benefit from the Internet II initiative to the extent that they do research on computer communications. But university interest is broader than that: there's great desire to make use of it as an infrastructure to support science and research and education.

To be quite blunt about it, I think universities have become accustomed to quite extraordinary quality of service because they were in early on a network that wasn't as heavily loaded as the public Internet is today. They're wistfully looking back on that, saying, 'gosh, we miss the days when there was more than enough resource for everyone and it didn't cost us very much because the government was part of the subsidising agent.' I think that's what they're hoping will come out of this. However, it's my understanding that a number of universities have agreed to put a substantial amount of money into a buying consortium, and this money is separate from the Internet II initiative that the government is funding.
I think of the government initiative on the research side as distinct from the university efforts, which look more like a buying consortium for capacity or a high-speed university environment network. I think that's perfectly fine: there's nothing wrong with the universities working together to set aside capacity that they can use to communicate with each other. I think they've all begun to realise that the economics of this aren't trivial -- when buying 155Mbps service you're getting into serious investment. I don't know exactly where that's going to end up but I can appreciate the motivation that the universities have.

I think of the next-generation Internet as sort of falling into this category, except the government is proposing to use that program to help bring some of the low-end, K-12 components of our educational system and libraries up on the Net. That would be complementary to the university initiative. ... Some of my confusion has come from the fact that a figure of $100 million kept popping up in all of these announcements -- I hope it wasn't the same $100 million being counted three different ways! ... It wouldn't be the first time that happened. I read it that the research component was on the order of $100 million, already accounted for in large measure by the research budgets of the various research funding bodies; the education and library initiative another $100 million figure but separate from that, very possibly new money; and then the university contribution, which came to something like $65-$100 million over five years, possibly somehow intertwined with the federal money.

Making Real-Time Services a Reality

APC: If we can talk about the services you'd hope or expect to see emerging from the Internet II set of initiatives -- things like multicast, and you talked about real-time video and audio. What are your hopes and expectations on that front?
VC: Actually we need to look in both the commercial direction as well as the research direction for some answers. On the commercial side there is a considerable effort going on at companies (MCI, Cisco, Intel and others) to try to actually build multicasting capabilities and differential quality of service functionality into the existing Internet technology. It's very hard, it turns out: this is definitely not a trivial proposition, especially when you look at the way in which the implementation has to go. It has potential scaling problems; when you scale things up to very large numbers of participants sometimes the complexity is not completely linear -- it's more than linear. The computational requirement or the amount of information that has to be moved around on a wide area becomes more than linear. There is tremendous effort going on in the vendor community, but there also needs to be serious work in the R&D side. There may be technologies which R&D people can look at which aren't ready for prime-time use yet. Some of the optical ideas are quite intriguing: if one can truly switch packets without ever turning them back into electronics that's quite dramatic. You also have the possibility of handling enormous bandwidth simply because you're just moving photons around -- we're talking about switching gigabit streams without a great deal of trouble! Per-packet switching in the optical domain is mind-boggling to say the least. I have looked at designs (I even have a patent filing on one design) for doing all-optical packet-switching, but every time I think about what it takes to do it I think, 'My God, it sounds like Buck Rogers.' I'm expecting to see two really important things emerge on the R&D side. One of them is the ability to build a very large-scale -- a global-scale -- multicasting capability; the ability to join and leave various multicast groups almost as if you were switching the channel on a television set. 
What you should bear in mind as you think about this is that it's not quite like switching the channel on the television, because there you're only a receiver of communications, and in the multicast system it's possible for you to be a contributor as well. The mechanics for allowing many, if not all, of the multicast participants to not only receive but also to send is one of the things that contributes to the complexity.

We'll see a lot of focus on support for real-time services mixed together with non-real-time requirements. We will need, for example, to label packets differently depending on how they should be treated in the interior of the network. I know you don't want an engineering tutorial here, so let me just say that there are some quite complicated technical problems associated with handling mixtures of real-time and non-real-time packets that are, in my view, not yet solved.

APC: Do you think optical switching is capable of analysing packets and switching them fast enough? It's the analysing stage that becomes a bottleneck, isn't it?

VC: Let me try this and you tell me if it makes sense. People often think about these real-time classes of service in the context of the equivalent of television broadcasting -- a single source with many receivers and a need to get all the packets delivered quickly so that people get continuous sound or continuous imagery. So they imagine that there aren't too many sources: you know, 50 television channels, pick one. Many, many recipients, not too many sources. We think that's only part of the story. We think that if Internet telephony ever takes off there may be many, many pairwise flows of traffic through the network. Imagine for just a moment what it might mean for the collection of switches (the routers of the network) to know something about every flow of traffic.
If there are four possible participants in the Net, and they are involved in various and sundry different flows, there are 16 different combinations of possible flows. Even if only a small number of those combinations actually happen, then the number of flows about which each router might need to know becomes extraordinarily big as the total number of participants in the network increases. Any particular router might have to know about thousands -- if not hundreds of thousands, if not millions -- of different flows. And we all think that's crazy.

APC: They're all having enough trouble coping with domain names.

VC: Right. So there has to be a way of building a system so that only the edge routers know about these various distinct flows, but the interior of the Net somehow coalesces all of that into a few classes of service. These guys get priority over everybody else, these guys get dropped when you run out of capacity. A great simplification of the interior is needed to avoid the exponential explosion (or at least the quadratic explosion) in the amount of information on the number of flows. ...

Is the Internet Infrastructure Adequate?

APC: Would you see some sort of specialisation or microsegmentation in terms of the infrastructure setup, so that there would be separate links for real-time services and for store-and-forward services?

VC: Actually, not quite in those terms. What I am expecting is that we will succeed in simultaneously carrying mixtures of real-time and non-real-time traffic, [and we'll] find ways of giving priority or precedence to the real-time packets. The one place where I do expect to see something happen is that the switches that we wind up using may very well have both circuit as well as packet capabilities. In the telephone network it's often the case that if you observe the flows of traffic you conclude that pairs of central offices should be directly connected to each other and not go through tandem switches or anything else.
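Cerf's flow-explosion arithmetic a few paragraphs back is easy to make concrete. The sketch below (illustrative numbers only; the class names are hypothetical) counts the ordered sender/receiver pairs an interior router might in the worst case have to track, versus the handful of service classes an aggregated interior would see:

```python
def possible_flows(participants: int) -> int:
    """Ordered (sender, receiver) pairs: the worst-case per-flow state
    an interior router would have to track without aggregation."""
    return participants * participants

# Cerf's example: four participants give 16 possible flows.
print(possible_flows(4))            # 16

# The growth is quadratic, so even modest networks explode quickly:
for n in (100, 10_000, 1_000_000):
    print(n, "participants ->", possible_flows(n), "possible flows")

# With aggregation, interior routers see only a few classes of service,
# regardless of how many distinct flows exist at the edge:
SERVICE_CLASSES = ["real-time priority", "best effort", "drop first under load"]
print(len(SERVICE_CLASSES), "classes, independent of participant count")
```

This is the simplification Cerf describes: per-flow state stays at the edge routers, while the interior collapses everything into a constant number of traffic classes.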
So you will go in deliberately and put in high-capacity links. A similar phenomenon could emerge on the Internet: when you start to see pairs of sites exchange really significant amounts of traffic, we might conclude that there isn't any reason to go through intermediate packet switches -- we'll just put in a circuit and we'll use the circuit-switching capacity of these devices to simply pass the stuff directly from point A to point B and not bother going through the process of doing store-and-forward, because it isn't necessary. I'm not arguing that suddenly circuit-switching will replace packet-switching, but rather that when you get a high enough density you assemble all the packets that are required to go in a particular direction and you take them on a circuit to the target. That's the sort of thing I would initially expect.

I'm also looking forward to some other media showing up in our mix; for multicasting in particular we'd be looking at one-way cable and one-way satellite as a very common broadcast method for doing multicast delivery. Of course, it's very efficient because the medium itself takes care of replicating the packets. The hard part is that after you've completed the broadcast and it has reached the ground station or cable interface, if there is a store-and-forward subnetwork now waiting to receive this traffic and deliver it to many different targets, now you do have a store-and-forward multicast to do -- and that's not so easy to set up.

Is Videoconferencing Viable on Today's Net?

APC: Do you expect things like videoconferencing and document-conferencing to be realistic within the short term across the current Internet?

VC: Yes, I am expecting two things. I am less persuaded yet of the videoconferencing capability. Many of us have used videoconferencing of one kind or another, whether it's through the Internet or through other dedicated services; most of us, I think, don't find the video aspect of the conferencing terribly useful.
What I would say is that the one thing videoconferencing can do for you that audio bridge conferencing does not do is help you to see who wants to talk next. Sometimes that makes for a much smoother conference, but it's not the same as sitting around the conference table where you interrupt each other; it doesn't do quite as well.

Many of us are discovering a very different mode of operation where we send material to each other ahead of time as email attachments (as you have done in this case) and then get on a telephone bridge and talk about something you're all looking at. The next step up from that is a shared white space that everybody can see at the same time and can modify at the same time, in a conference where the audio part is on a conventional bridge and the shared white space is done by way of the Internet. The third case is one where the white space and the audio portion are both done through the Internet, and I expect that we will see more of that as time goes on.

Companies like mine are quite interested in the combined case because, for one thing, if you have only one link from home (only one telephone circuit) and you were on the Net looking at something and you wanted to call someone in a customer service centre about what you were looking at, it sure would be nice not to have to hang up, make the phone call, lose your place in the Web and not be able to see what you were looking at and wanted to discuss. A customer service arrangement where both parties can see the same Web material but can also talk to each other over the same line seems quite attractive.

Doomsayers Predicting the Collapse of the Net?

APC: I read that you thought current statements about the collapse of the Net were probably alarmist. Do you think that the current Internet infrastructure is able to cope with the services you expect to emerge over the next two or three years?
VC: I'm not quite as pessimistic as [3COM founder] Bob Metcalfe, but there are two things that are really quite challenging. One of them is dealing with real-time services -- it's technically quite challenging. Small amounts of it work already: the radio transmissions work quite well, in fact, because there isn't a real-time interactive requirement. I am expecting that quantities of this real-time stuff are going to be hard to deal with.

Setting aside the real-time traffic for a moment, the absolute traffic growth we're seeing in our backbone is something like 300-500% per year. It's very, very challenging. MCI's network has been increased in capacity by a factor of 15 since we first introduced it in October 1994. We started at 45Mbps, now we're running 622Mbps in our backbone and we are going to be bringing up [SONET] OC-48, which is a 2.4Gbps service, some time probably in the second half of next year [1997]. That is requiring pretty significant investment on our part. The technology for handling that kind of capacity is there in the fibre -- we can do 10Gbps fibre now with wavelength-division multiplexing, and we are anticipating 40Gbps fibre at the end of the decade. The hard part will be the switches: getting routers and IP switches that will run fast enough to handle all that is the most risky aspect (setting aside the whole question of real-time services).

APC: From what I understand, the capacity Telstra is planning to install by Christmas on the Pacific link is going to mean they have more Internet capacity than voice capacity on that link.

VC: I must say that the Telstra ambitions or plans there are quite dramatic. They are looking at well over 128Mbps trans-Pacific capacity coming to the United States some time in the early part of 1997. It's very impressive, and also apparently very necessary.
I must tell you I have the highest regard for Geoff [Huston, manager of Telstra Internet Services]; he's one of the best engineers and possibly one of the best managers I've ever met.

Geodiversity: a Global Balance for the Internet

APC: This brings us onto the question of global balance. I'm interested in any thoughts you have on evening out the network and making it truly global -- at the moment it seems that the US is home plate for everything. There's such imbalance in terms of capacity that the US is the heart by practical necessity. For an Australian company it's often better to put their Web server in the US if their target market is over here. I'm wondering what your thoughts are on how the network infrastructure will be balanced out, and how we can overcome any physical impediments to (the concept of 'biodiversity' was around a couple of years ago, so I'm calling it) 'geodiversity' on the Internet.

VC: Two obvious things. First of all, the tariff structures for dedicated circuits have long favoured bringing circuits to the United States instead of other countries. Instead of going from Burma to Indonesia, it might be cheaper to go by way of the US than it is to go direct. I don't know whether those tariff considerations are going to change very quickly, but I think there is much pressure to bring international tariffs more into line with cost than they are today. Today they are quite high, especially in Europe but I suppose also in the Pac-Rim. That will change with time, and as those tariff structures change I think the economic attraction of making a more richly connected network -- one which doesn't make the US look like a hairy billiard ball -- is likely. We can see some erosion in tariffs already starting to take place in Europe in anticipation of competition in 1998.
Similarly I'm seeing a considerable amount of growth and competition in the various Pac-Rim countries where there had not been competition before -- Japan is quite a dramatic example of that. Some of the Japanese companies (I'm thinking of KDD in particular) made an extensive effort to connect other countries to Japan and then to go across the Pacific to the United States. So, as an example, KDD is creating new infrastructure outside the US. Many countries have expressed a desire to serve as hubs -- Singapore is one, Indonesia is another, Japan certainly. We're seeing similar kinds of infrastructure growth in Europe, [but] it's slow going there because of the historical monopoly. Restructuring of telecoms in Europe was quite a struggle but that does seem to be coming. In South America we're beginning to see similar interest, starting in the academic community but ultimately reaching the telecommunications world.

APC: I just came across my first Web server in Argentina the other day. I have a friend down there and I looked up her service provider to try and work out her email address.

VC: We are starting to see significant growth in that part of the world and I would anticipate increasing amounts of regionalisation of backbone as the tariff structure comes back to cost-based pricing.

An Asia Pacific Backbone

APC: I know AT&T have started to build what they're calling a Pac-Rim backbone from Japan and Hong Kong to Thailand and Malaysia and down to Australia. They've been promoting that heavily in Australia. Is MCI undertaking any of those specific regional initiatives?

VC: As a matter of fact we have a more global one with our partner British Telecom called Concert Internet Plus. Concert is our joint venture with BT and a 20-country backbone is in the works. [This interview was conducted shortly before BT acquired MCI in a corporate merger in November.]

APC: What's the Asia-Pacific component of that?
VC: We've announced that we will bring up facilities first in Australia and Japan. We have visited but not yet concluded what the next tier of countries will be in the Pac-Rim, but the likely candidates are Singapore, Taiwan, Indonesia and Korea. China is of considerable interest and I don't know where that's all coming out. I know there is a big Internet initiative in mainland China; [they're] building a 45Mbps Internet backbone connecting all the provinces together. I hope to have an opportunity to meet with the Chinese officials soon to learn more about the progress being made. The investments in telecommunications in China are quite significant.

APC: Is that being mainly envisioned for government use, military use, for research or for commercial ventures?

VC: My understanding is that this was a commercial initiative. The intent was to make it available primarily to businesses, although I'm sure it couldn't be ruled out for government use. Plainly there are restrictions the Chinese Government would like to apply on the WAN; there are concerns over content and the like, and that's going to colour the rate at which the general population gets access. It seems to me the business community is forging ahead with a good deal of interest in having that service on a commercial basis.

Effects of Regulatory Liberation

APC: What do you see as the impact of regulatory liberation around the world, particularly in Australia?

VC: ... One thing I would observe is that competition is becoming by far the most common policy choice in most countries of the world. There are only a few exceptions where there isn't some evidence of that. Some of the exceptions might include Singapore, for instance; even there, there may be some willingness to have Internet competition even if conventional telephony remains a monopoly. Of course, MCI, being born in competition, feels strongly that this is a good way to go.

ATM and the Internet: Marriage or Mirage?
APC: On to another topic, ATM and the Internet. For probably the last two or three years, I've been reading about ATM and the Internet and how it's an inevitable marriage. I'm wondering what your thoughts are on it and whether it may turn out to be bypassed by another technology.

VC: Well, first we could make a little word pun here if you wanted to say: 'ATM and the Internet -- an inevitable marriage, or a mirage?' I need to be sort of two-faced in my answer to you. First let me make sure you know I have been rather vocally suspicious of ATM technology. I am not an Andy Bechtolsheim: I don't think ATM is dead. The way [his] argument read, as I looked at his recent interviews anyway, seemed a little on the self-serving side, because he's supposed to be building a gigabit-speed Ethernet. But I think he addressed his morbid remarks to a subset of the ATM market and not the whole: he spoke of ATM being dead, but he meant (I think) perhaps ATM to the desktop. I can appreciate that current technology makes ATM interfaces, especially optical ones, a little on the expensive side compared to Ethernet.

APC: From my perspective, the delays in ATM to the desktop and the arrival of 100Mbps Fast Ethernet seem to have left it as a bit of a lame duck: no one really wants a 25Mbps desktop technology.

VC: I think, though, that ATM in the wide area could be attractive. I have several problems with the ATM implementations that I've looked at so far (we have something like 20 of them in our lab). Many of the products don't have adequate buffering to handle very large dynamic ranges of packet sizes. Sending a very large Internet packet causes many, many hundreds of [ATM] cells to be created, and somebody has to put them somewhere while they move out into the network. So there are buffering problems, and there are absolute overhead questions (how many excess bits you send because of everything being broken up into small cells).
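The cell-overhead question Cerf raises can be put in numbers. The sketch below assumes standard ATM framing (53-byte cells: 5-byte header, 48-byte payload) and AAL5 segmentation with its 8-byte trailer; the packet sizes are just illustrations:

```python
import math

ATM_CELL = 53                           # total bytes per ATM cell
CELL_HEADER = 5                         # header bytes per cell
CELL_PAYLOAD = ATM_CELL - CELL_HEADER   # 48 payload bytes per cell
AAL5_TRAILER = 8                        # AAL5 trailer, then pad to a cell boundary

def atm_overhead(ip_packet_bytes: int):
    """Return (cells, wire_bytes, overhead_fraction) for one IP packet over AAL5."""
    payload = ip_packet_bytes + AAL5_TRAILER
    cells = math.ceil(payload / CELL_PAYLOAD)
    wire_bytes = cells * ATM_CELL
    return cells, wire_bytes, wire_bytes / ip_packet_bytes - 1

# A 1500-byte Ethernet-sized IP packet:
cells, wire, overhead = atm_overhead(1500)
print(cells, wire, f"{overhead:.1%}")   # 32 cells, 1696 bytes on the wire, ~13.1% extra
```

Small packets fare worse: a 40-byte TCP acknowledgement still occupies a whole 53-byte cell, roughly 33% overhead, which is part of why the "cell tax" argument against wide-area ATM kept coming up.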
There is the possibility of circuit-setup delay when you try to create a switched virtual circuit (SVC). There are so many parameters associated with [SVCs] that some of us wonder whether it will ever successfully negotiate the same parameters at either end. It is intended that ATM will offer different classes of service; they go by titles like ABR (available bit rate), VBR [variable bit rate], CBR (constant bit rate), UBR [unspecified bit rate] and so on. Each of those different types of traffic needs to be distinguished in the ATM switch -- it's just a packet switch when it comes down to it. We don't know, frankly, whether all those things are implementable in an efficient way.

I am using ATM in our backbone right now because it was the only way I could get the Cisco routers to run as fast as they do. However, if I had direct IP over SONET or PPP over SONET, I might be tempted to build a SONET-based system rather than an ATM-based system. But I don't want to rule out ATM in the wide area entirely, because it has worked very well for us and we've been relying on it for the last year or so.

Gigabit Ethernet, ADSL and Cable Modems

APC: Do you think that it may end up being restricted to wide-area services, particularly if the Gigabit Ethernet researchers are able to get a product out soon?

VC: If anything it's kind of a see-saw. ... I think we're going to see-saw back and forth between a shared network that runs like a bat out of hell and you're happy, to the shared network that is overloaded because the computers we put on it are so powerful -- then we go back to switching techniques, then we get higher speed sharing. So we just alternate back and forth. ATM isn't dead; ATM has a place in the architecture of the Internet -- I just don't know how long it will have a place, whether it will cycle out and be replaced by something else.

APC: What about ADSL [asymmetric digital subscriber line]? Where do you see that fitting in?
VC: I'm quite excited about it because some of the reports are that 1000-foot twisted pair could deliver as much as 50Mbps. I don't recall whether that was full-duplex or half-duplex, but it doesn't matter: even if it's 25Mbps in both directions, it's still pretty impressive. The only way to do that, of course, is to have a hybrid fibre arrangement where fibre comes to the kerb and you distribute traffic out on a twisted pair. It's looking pretty reasonable, and so we have some trials going both in the US and in the UK with British Telecom. Although it's early days yet, some form of DSL, whether it's ADSL, DDSL or HDSL or one of the others, looks to me quite attractive, and certainly a reasonable alternative to hybrid fibre-coax television cable.

APC: What about cable modems?

VC: I have two reactions to cable. One is that the one-way cable we have today might be a perfectly good medium for multicasting. In the most trivial sense, you can embed packets in the television broadcast, or you could take more of the channel and do one-way multicasting at fairly high rates. In the short term, I'd like to see cable used as a one-way multicast technology. Longer term, of course, I do hope we'll have two-way cable capability; maybe we'll be able to use it as an alternative to ADSL or to conventional hardwired communication.

Them and Us: A Lack of Symmetry

APC: Do you see the lack of symmetry in the current architectures that are being deployed as a problem?

VC: Certainly it's the reverse channel that's the hard one to implement. I've actually seen several different designs, one of which does involve an asymmetric allocation of capacity, and others are a little bit more uniform. I think this depends a good deal on the applications that you're running: if it's a real-time Web browsing kind of service then a low-bandwidth uplink is not too bad.
If, on the other hand, you were the source of that uplink traffic (if you were a Web server) you would not like the slow back-channel; you want as much capacity as you can get. The utility of an asymmetric channel depends entirely on the applications that you're trying to support. Personally I hope we can get it all to work because I would not mind having a 25Mbps download capability! Internet Telephony and Economics of Voice over IP APC: On the subject of Internet telephony, one of the more expert journalists in Australia makes the argument that Internet telephony is not technically the most sensible thing but is more a function of telephony tariffs. VC: I think we need to look at this both internationally and domestically. In the domestic US we do have a rather artificial tariff situation where the access charges for ordinary telephony are 2.75 cents per minute on each access line. A typical long-distance phone call takes 5.5 cents per minute just to handle the two local exchange carriers at either end, plus whatever the long-distance charges are. When you do Internet [telephony], it turns out you don't have to pay those access charges; you just pay a fixed price for a common business line. Depending on how hard you drive that business line (for dialup, or something) you will experience a much smaller absolute cost per minute than you would otherwise, because you are not being charged per-minute rates as you would for ordinary telephony. So some of the cost savings associated with domestic Internet telephony are not the result of any special technology, but precisely the result of not having to pay tariff charges for access. On the international side it's more complicated. The absolute cost is quite high compared to the cost domestically, and so there I can see a possibility for the Internet to be dramatically less expensive. 
Actually realising such an outcome depends a good deal on what technology we get to use and whether or not the regulatory environment would permit any kind of cost saving to be passed on to the customers. Here you start getting into international law and regulation, where I'm not competent to even speculate. APC: Do you think that voice over IP makes sense technically, as compared to the existing phone infrastructure? VC: If the access charges were back down to cost and we charged access costs to everything including the Internet, then the attraction of Internet voice on economic grounds would evaporate, because the bulk of the advantage is showing up merely as access charge reduction. On the question of whether it's technically sensible, I think the answer is yes, it can be. I like it for call-centre applications, like we were talking about earlier. When you get on the Net and you want to talk to somebody about something and yet you don't want to lose your place, having the ability to launch a telephone call out of your PC would be attractive. But not everybody is going to run around with a PC to make a phone call. So then we get into this question of whether bypass (by which I mean the long-distance part is carried on the Internet rather than on conventional telephony circuits) is important economically. For the domestic case, it's access charges that drive the equation. If those come down, the primary domestic economic argument will evaporate, and if anyone wants to do Internet telephony it will be because of the convenience of doing it while you're already on the Net. Possibly the combining of services, Internet and telephony services together, [leads] into a more interesting menu of things you can do. On the international side, I think that there will be strong attraction to doing Internet telephony. 
But all of these are going to be coloured by the actual absolute capacity that the network has to support real-time service, and if it does not have adequate capacity then the service won't be very good and people won't be enthusiastic about using it. I personally believe that in the long term we will figure out how to do Internet telephony and it will be an attractive offering. APC: I guess that phone companies would probably be feeling threatened by a transfer of voice calls from conventional circuits to Internet circuits? VC: I suppose some of them may feel threatened. We are anything but afraid of technology. The right thing to do is to embrace it and figure out how to use it. It's just like the publishers, remember: they were scared to death of online publishing and they still have good reason to be. APC: The argument is not so much that [telcos] are afraid of the technology, more that it's the economics that are skewing people to use voice over IP. Since voice over IP is less efficient than a dedicated voice circuit, it seems it's the economics rather than the technology that is skewing things to favour voice over IP. VC: What will drive voice over IP is going to be convenience and new kinds of services that you couldn't build with either circuit switching or packet switching alone. In the case of telephony in general, I favour the argument that if someone's going to eat your lunch it might as well be you. When you look at the absolute current capacity of the Internet, and even imagine it entirely being used for voice, even with very high compression -- it still doesn't amount to a very significant fraction of the total voice system. However, in a few years' time we will have a very different picture; there will be enough Internet out there (and enough capacity dedicated to it) that you could handle an increasing amount of voice. I believe it will be an attractive offering. APC: Do you think that current economics are skewing people to use an inappropriate technology for voice? 
VC: Well, people who are trying to save money on international calls may very well try to use the Internet to do that and not get the grade of service they expect, because either there isn't enough international capacity or the delays are high. They're being attracted by price, and it's amazing what people will put up with if you sell it to them cheaply enough. There is a price play there; in the long run, though, all of this has to balance out, because the Internet can't handle a great deal of this sort of traffic without increasing its total capacity. Ultimately I think you end up with pricing strategies that are related to volume of use and it becomes a self-funding system. Today it's less clear. Going the Distance: New Charging Models for Hybrid Services APC: What is going to be the appropriate charging model, particularly as all of these services get integrated -- the voice and cable and Internet may all just end up being one cable to your home, ultimately fibre. What is going to be the charging model? [Carrier] companies aren't going to want to have to analyse every packet and decide what type of service it is [in order] to charge for it. Do you think that time could be the delimiter? VC: Actually I don't know yet, but we've been studying this question. First I would have to say that flat-rate services only make sense where the statistics support you. If usage turns out to be statistically unstable then it's very hard to institute a flat-rate program, because you can't predict very well on average what people are going to do. If you can't predict it then you don't know how much capacity you have to dedicate for their aggregate needs, and therefore you don't know how to calculate what the right flat-rate charge is. I believe, after looking at this for some time, that unless we have a volume-sensitive pricing strategy we will not be able to sustain the business as we keep increasing the amount of backbone capacity. 
I believe at least some traffic, especially the best-efforts traffic that doesn't have any special treatment, may very well be priced the way traffic is priced now. But the special-treatment stuff (things that should be higher priority or whatever) will probably get special treatment and also special charging. APC: I saw a good point on a mailing list the other day: if we move to volume-sensitive charging and there's no solution to spamming as yet, people are paying to receive junk. That's an encouragement to people who are marketing via spam to externalise their costs. VC: The Internet is interesting because both sides pay for access to it; it's not quite like the telephone system, where whoever places the call pays for the call. That makes it harder to analyse: the fact that both ends pay for some of it means that it's more like cellular telephony than conventional telephony, because when you call somebody it's costing that person some money. That will probably colour people's behaviour; I hope it will colour it for good, in the sense that we'll get rid of some of this junk mail that just comes. APC: Do you think that ultimately distance is discredited as a unit of measure -- in telephony particularly, but also online services? VC: I think that one might be able to conclude that for the domestic market, but it would be more difficult to conclude it for the international market. The cost of the international circuit is so high that to completely ignore it and subsume it into the average cost on the network would not be a good idea. I'm inclined to think that there will have to be at least some sensitivity to distance for other countries. Regulating the Industry: the Proper Role of Government APC: Where do you see the appropriate role of federal governments in regulating the industry in general? VC: I prefer to see as little regulation as possible, to be quite honest with you. 
The area where I think the government has good reason to participate, whether it's a regulatory body or some other part, is standards-making. Plainly the government should participate in that; it should help sponsor it, and it should endorse the work of the private sector in creating standards. When these things become infrastructure, as the Internet is becoming, there may be public policy questions arising with respect to safety, for example, or concerns for public welfare. We already are actively enforcing laws that are not specific to the Internet; for example, fraud is fraud whether it's done on the phone, on the Net or in the postal system. Copyright violations are quite real regardless of whether they are on the Net or elsewhere. There are some awkward situations in the online world where previous copyright concepts become quite difficult to address. Once one decides that something ought to be protected, then a violation of that protection is just as invalid in the Internet world as it would be in the paper world. APC: My view is that the current law can be extended, by analogy and by further legislation, to cover the new environment. VC: I would agree with you in large measure. There are a few nuances that are showing up in this particular environment that make it not ... cover all the cases that we have to consider. One of the awkward things is that, although the physical manifestation of the Internet does exist in the real world, somehow or other the way you work with it is insensitive to its location. So you get into these problems where you're accidentally operating on an international basis whether you intended to or not -- your Web page is accessible to everyone around the world. There are aspects of the Internet that are not well enough understood yet to say what the right legal posture should be. But there are lots and lots of abuses that I see in the real world which have their analogues on the Internet and which I hope are equally prosecutable. 
Child pornography is illegal regardless of what [medium] it came in on. Harassment is another example. There are some interesting questions about libel and various kinds of tort; it's not quite clear under what circumstances somebody has damaged you. MCI has quite an active intellectual property and general legal team which is following this with considerable interest. How Can Regulatory Bodies Possibly Cope? APC: One area where I am interested in your reaction is the role of government in anti-trust and trade practices issues. VC: Once again I draw the line where public welfare is at stake. Anti-trust is intended to protect the public from [predatory] pricing, for example, and other practices that are not permitted. I think the government continues to have a role there in monopolistic practices; practices that are in restraint of trade or restraint of competition are not good, and the government should step in. There are other areas where perhaps the government shouldn't be regulating, like pricing, which I think really ought to be left to the marketplace. But I could imagine the government having a role in protection of the public from ... predatory pricing. Where there are violations of behaviour that we would prosecute in other media, one would expect similar treatment on the Internet. However, there are places where it might not make sense; take the seven dirty words that you are not supposed to use on radio and television -- it seems to me some significant fraction of discourse on the Internet might disappear if they tried to enforce that. APC: Do you think that bodies such as the [US] Department of Justice, and the Australian equivalent [Australian Competition and Consumer Commission], are going to be effective in the long run in managing these industries and managing anti-trust issues? Since the legislation was passed earlier in the year to deregulate the [US telecommunications] industry, there are all these mergers and vertical integration happening. 
Do you think that the market is growing almost too complex to really understand the effect of business moves enough to regulate it? VC: I must admit that it's a very complex environment and it's changing quickly. I would not want to be in the shoes of the [US Federal Communications] Commission right now; I think if I were [FCC Chairman] Reed Hundt I would be sitting there scratching my head saying: 'Gee, I used to know what a telephone company or a radio broadcaster or a television broadcaster was. They were well-defined things and worked in different media; now this damned Internet thing comes along and it does everything. I don't understand what I'm regulating any more.' I'm very sympathetic to that. Security: the Bugbear of Internet Commerce APC: Do you think that security is still one of the principal bugbears for Internet commerce? VC: Perhaps not in the sense that many have taken it to be. I don't consider the concern about losing your credit card number to be a very serious impediment to Internet commerce. Convenience will almost invariably win out over anything that introduces complex security behaviour. There is a desperate need for good-quality security in the base Internet backbone. We get attacked on a regular basis by a lot of people trying to interfere with the operation of the Net. Having instituted a lot of special security features, our piece of the network functions [safely]. On an end-to-end basis I think users are interested in their own privacy, and so there will be intense pressure, if there hasn't been already, to bring up secure email and things of that sort. That the network needs to protect itself, in addition to providing privacy to users, is indisputable, in my view. US Encryption Policy APC: Do you think that the recent liberalisation by the US Administration of encryption export policy is a positive step? Do you think it goes far enough, given that it does not permit 128-bit encryption? VC: I don't think that it's quite the right policy yet. 
For one thing there's an escrow component that's hiding in there, and I'm not entirely comfortable with the idea that the US Government has to have authority over who can hold the escrow keys. You can imagine trying to sell this to a businessman in France, saying: 'Well, this is really a great service, it's nice and secure, and by the way, when we assign the cryptokey we have to give access to it to the US Government'. That's going to make them feel very comfortable indeed! I'm not very happy about the government's position on cryptography. Moreover, my understanding is that there still is a desire to restrict the maximum key length. Quite frankly, I think the business community recognises the need for high-quality crypto even more than the government does. The key sizes are going to have to be big enough to withstand attacks for some time, for some number of years. Security: Psychological or Technical Issue? APC: Do you think that security is fundamentally a psychological issue or a technical issue? VC: It has enormous technical components to it. I think it's a technical problem, and I think it's a real problem. When you say 'psychological' it sounds as if people really don't need it, they just think they need it. My reaction is that not only should they think they need it, they do need it. I think it's quite vital that people accept and endorse the need for good security practices. APC: People seem excessively paranoid, to my mind, about credit cards via the Internet, where they'll happily give a credit card over a phone line. VC: Exactly. Here I think the only reason that's become a cause for alarm is that people imagine these programs lurking in the network sucking up everybody's credit cards. It's fair to say that there is more potential for that on a network than with a waiter taking your credit card. But as you point out, you don't know when you call on the phone whether somebody is real or not. 
It could be just a corporate front for fraud, taking your credit card number and then going off and abusing it. I agree with your argument that people ought to be just as concerned about the telephone as they seem to be about the Internet. APC: Do you think that's only a matter of time and confidence? VC: Yes. I also believe that we will get suitable cryptography in place to give added comfort. Why Java's Not the Most Exciting Technology on Net APC: On to Java: do you think it's the most exciting technology you've seen on the Internet? VC: Well, no, but I don't mean to damn Java particularly. I have been working on knowledge robots since 1986, and so the basic idea of having mobile software and an interpretative platform is old hat. I'm excited about the whole idea, in fact, and I'm insensitive as to whether it's Java or Visual Basic or some other language. I do like the idea of being able to outfit a server with new functionality or outfit a client with new functionality on the fly. When you are out there surfing the Net and you pull back an object from some database that your software has never seen before, you can deliver along with the object the software that can be used to interpret it. That can be quite an exciting capability. It also unlocks the possibility of being able to protect an object -- a chunk of HTML or what have you -- by not permitting access to it unless the proper credentials are shown. That gives an added measure of protection when you are concerned about copyright management, for example. I don't want to go on record as being unenthusiastic about Java. I actually like it a lot, I just don't know whether that's gonna be the language of choice. At the moment it's quite popular, but I don't think I have seen very many substantive applications as yet, and it's the applications that will be the final driver in whether it becomes a successful part of the Internet. 
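[The pattern Cerf describes -- shipping, alongside a data object, the software needed to interpret it -- can be sketched in modern Java using runtime class loading. This is an illustrative sketch only, not any specific system he worked on; the Renderer interface and HexRenderer class are hypothetical names chosen for the example.]

```java
// Sketch: a client receives an object plus the name of the code that
// knows how to interpret it, and loads that code at runtime.
// All class and interface names here are hypothetical.
public class MobileCodeDemo {
    // The contract every piece of delivered interpreter code must meet.
    public interface Renderer {
        String render(byte[] object);
    }

    // Stands in for interpreter code delivered over the network; in a
    // real system its bytecode would arrive with the object and be
    // defined through a ClassLoader rather than compiled in.
    public static class HexRenderer implements Renderer {
        public String render(byte[] object) {
            StringBuilder sb = new StringBuilder();
            for (byte b : object) sb.append(String.format("%02x", b));
            return sb.toString();
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] unknownObject = {(byte) 0xCA, (byte) 0xFE};
        // The client knows only a class name that arrived with the object.
        String deliveredClassName = "MobileCodeDemo$HexRenderer";
        Renderer r = (Renderer) Class.forName(deliveredClassName)
                .getDeclaredConstructor().newInstance();
        System.out.println(r.render(unknownObject)); // prints "cafe"
    }
}
```

[The point of the sketch is that the client never names HexRenderer at compile time; the rendering behaviour arrives with the data, which is the "outfit a client with new functionality on the fly" idea.]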
APC: To what extent do you think it provides assistance for the age-old problem of knowledge robots and intelligent agents? VC: It conceptually has the ability to do this, but I worry about performance because of the fact it's an interpretative language. I'm told, however, that there are compilers coming or already in existence that may solve that problem. Then comes the worry: will the software program actually do what it's supposed to do, or will it do things it wasn't supposed to do? In addition to performing the function you wanted it to, does it copy your email and send it to the Washington Post? And of course that's a hard question to answer; there is no general answer. I am enthusiastic about the idea of having a common interpretative platform, and nervous about performance and security. APC: To what extent do you think it enables independent software actions? VC: It actually appears to be quite reasonable for that. A persistent Java program could certainly run for quite a long time all on its own. APC: To what extent do you think it needs to fit into a larger model such as CORBA to be really useful on that knowbot front? VC: Let me separate this into the generic question, since I'm nervous about CORBA. But my nervousness is not based on as deep an appreciation for the details as I should have to hold a legitimate opinion. CORBA feels a little on the large and clumsy side to me; it makes me worry, like X.500 does. It feels a bit on the sluggish, clumsy, not-too-easily-applied side, in which case I'm not enthusiastic about it, but we do need to have some common object structures in order to have mobile software that can successfully interact with objects that they create. I don't know if CORBA is going to be it, but I am favourable to the basic idea of having a common object model. APC: Do you think that ActiveX may get the nod because it provides a more understandable, ad hoc approach? 
VC: To the extent that Microsoft penetrates an enormous fraction of the market, it may very well just de facto wind up being the method of choice. Network Reliability: It's Still Not the Phone APC: In terms of network reliability, we haven't yet reached the stage where the network is as reliable as the phone network, where you just pick it up and it works. Cable TV networks have similar reliability. Do you think that is a way off? VC: There's still a lot of investment to be made in order to achieve the kind of reliability that we shoot for in other infrastructures. Certainly reliability on a global scale, where every network has taken a posture that is aimed at reliability everywhere, is, I think, a long way off, because of the number of different organisations that would have to decide that they're going to invest in that kind of redundancy. On the other hand, however, pressure will build dramatically for reliability as people become more and more dependent on the service. So I'm frankly looking forward to that pressure, because I would like to make our part of the Internet as reliable as possible. APC: We were talking before about virtual circuits or actual circuits between different businesses; do you think that we might see more peering between Internet service providers as a means of redundancy and better network performance? VC: Certainly we are seeing more and more direct peering as opposed to use of the MAEs [Metropolitan Area Exchanges] as the interchange, and yes, I do expect to see more of it. The MAEs turned out to be bottlenecks in large measure, and I think their utility is slowly eroding. IPv6: Now or Never APC: On IPv6 or IPng, when do you expect that it might start to appear and become the standard on the network? VC: We are testing some IPv6 now; Cisco has released some software that's not yet production. Other organisations like FTP Software have announced IPv6-compatible stacks. My guess would be that we would start seeing some limited deployments in 1997, ... 
not first quarter; if anything, second or third quarter. I'm eager to get on with it though, partly because the sooner we can get some into the Net, the sooner we can discover where it doesn't work so we can fix it. Second, I'm concerned, as we all are, about possibly running out of address space. We'd like to get the new solution in place as quickly as we can. It's not going to be an easy process. APC: What do you think will be the key problems in that transition? VC: There's sort of a horse race, of course, in backwards compatibility: how you refer to an IPv4 address. You can make IPv4 sites and IPv6 sites interwork, except for the fact that they have different physical address formats -- that only works so long as there's a direct mapping from IPv4 to IPv6. As soon as you run out of IPv4 space then it doesn't work any more. That's why I am eager to get it out there and get it in place, apart from the fact that it also has the flow capability and we can see how that works. Copyright and the Distribution Medium APC: On the question of copyright and the distribution medium, I saw in your 'Letters from the Future' that it seemed to envisage the concept of information brokers, in my reading of it. But that may be just my perspective? VC: No, I think that you picked that up quite well. I was actually thinking of knowledge robots that would be repositories of data, where information is stored away along with copyright information, terms and conditions. I believe that's coming rather soon, and the reason I feel that way is that there's already an ongoing pilot at our Library of Congress. CNRI has a research program under way with the copyright office of the Library of Congress, and they have actually brought up an electronic copyright management system. I participated in the design of that, and that's why it coloured some of what I wrote in the 'Letters from the Future' article. 
So I am expecting to see some serious efforts to put that into production use in 1997. Censorship at the Periphery APC: On the issue of censorship, you talk about leaving censorship at the periphery. Can you elaborate on that? VC: Yes. What I was thinking primarily is that it's at the end-station, the client side -- if anybody's doing any filtering, that's the right place to put it. I also don't like government censorship; I'm strongly in favour of Senator Leahy's view of all this, which is that it's the job of individuals, parents and teachers, those who feel they have the responsibility and the authority to make decisions like that. I don't believe our government has the authority to make such a decision. I will draw the line, however, at some places where a practice is so abusive that it is acceptable to make it illegal -- child pornography being an obvious example. APC: Do you think that globally there is enough consensus to implement global censorship on those sorts of issues? As the Internet grows in scale, it takes in Moslem and Hindu societies and all sorts of social values the world around. Do you think there is any way to censor globally? VC: I think it's going to be very difficult to have a common view on censorship. Some practices are so awful that they are considered awful in every country. ... It's hard to imagine any culture where abuse of a child is considered acceptable. But you're quite right, there are other practices that are acceptable, for example in the Netherlands, which might not be acceptable in the United States. One then has to ask how we deal with this. ... It's better to provide tools for the users at the ends to protect themselves. If you don't want to be exposed to some material, you ought to be able to tell your browser not to connect to anything labelled 'porn'. 
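[Filtering "at the periphery" of the kind Cerf describes -- the client consulting a locally configured policy before connecting, under the control of the responsible adult -- can be sketched as follows. This is a minimal illustration under assumed label names and a hard-coded label table, not the PICS rating standard or any real browser's mechanism.]

```java
import java.util.Map;
import java.util.Set;

// Sketch of client-side (end-station) filtering: the browser checks a
// locally configured policy before connecting to a URL. The labels and
// the blocked categories here are illustrative assumptions.
public class ClientFilter {
    // Labels attached to content; in practice these would come from a
    // rating system, but here they are hard-coded for illustration.
    private final Map<String, Set<String>> labels;
    // Categories the responsible adult has chosen to block.
    private final Set<String> blocked;

    public ClientFilter(Map<String, Set<String>> labels, Set<String> blocked) {
        this.labels = labels;
        this.blocked = blocked;
    }

    // The decision is made at the client, not in the network:
    // refuse to connect if any of the URL's labels is blocked.
    public boolean allowed(String url) {
        for (String label : labels.getOrDefault(url, Set.of())) {
            if (blocked.contains(label)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        ClientFilter f = new ClientFilter(
            Map.of("http://example.org/a", Set.of("porn"),
                   "http://example.org/b", Set.of("news")),
            Set.of("porn"));
        System.out.println(f.allowed("http://example.org/a")); // false
        System.out.println(f.allowed("http://example.org/b")); // true
    }
}
```

[The design point matches the interview: the policy lives only on the end-station of the person responsible for it, so one household's settings have no effect on a neighbour's browsing.]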
APC: From a technical viewpoint, do you think there is a capability to implement global filtering or censorship at the router level or somewhere in the network? VC: I think the answer is yes, but here I am thinking of third-party filtering. It would work like this: you pass your URL to somebody who's got a filter server that says, 'Oh yes, I've seen this URL and I rate it X'. The server sends back the opinion that it's rated X, and then the browser decides whether to display it or not. Plainly there will be differences of opinion among different groups as to what's X and what isn't X. There are also going to be some complexities when in one country something is considered acceptable but in another country it's not. ... In a sense you want your filter to work regardless of the country of origin. The hard part here is that the Web is not the only thing on the Internet; there's email and file transfers and the like, and if you are serious about censorship you have to be prepared to go along and do something funny to all of that. That's pretty horrendous and I wouldn't want to contemplate it. I don't believe in censorship particularly, certainly not by the government anyway; but I accept that parents have a legitimate responsibility and we should provide technical means of discharging that responsibility, but it ought to apply only to those people for whom that adult has responsibility. I don't expect my neighbour to be in control of what I read: I don't think my neighbour has the right to do that and I don't think the government has a right to do that. Embedding Technology in our Lives APC: We are seeing networking devices in every aspect of our lives. I'm wondering what you believe will be the social impact and social changes that will result from that change? VC: Scary as hell! I have a personal anecdote to tell about one of today's scrambles, and it in fact illustrates how dependence on these things can become quite risky. 
I am hearing impaired; I wear two hearing aids. They are of a type made by AT&T (of all things) and they do not have built-in controls: the control is an external device that uses an ultrasonic signal to change the program and change the volume of the hearing aid. The control broke today; it was not functional even after fiddling with batteries and polishing the contacts and everything else. I finally tried to track someone down on a weekend to supply me with a substitute. It took a while to find somebody because it turns out the devices are programmed and have serial numbers programmed into the controllers: a control is only supposed to operate the hearing aid that you're wearing, not your neighbour's hearing aid. You can easily see how the engineers wound up with this design, because you don't want one guy setting off this little gadget to impact everybody in the building. The trouble is that meant I had to go find somebody who could program a backup unit with my serial numbers. We found somebody, and it all got worked out, but I thought that's a fairly complicated thing compared to just going and buying another one. That's just a tiny example of how your earlier questions about security and reliability are vital to our ability to incorporate all this stuff successfully into our day-to-day lives. The scary part is that we're doing it anyway -- whether we like it or not, people are beginning to rely on the presence and availability of the Internet, and when it isn't there it actually causes trouble, like that horrible 19-hour outage that AOL [America Online] had. I feel really bad for them because I could easily imagine that happening to anybody, including us. APC: There was the story here about the outage caused by rats at Stanford University. VC: Yeah, another good example. Now after that one I went and checked to see how we were doing on backup, and I have motor generators all over the place. It'd take a lot of rats to knock us out of business. 
We're a telephone company and we understand what reliability means to people. APC: Phone companies do have all of these electricity backups and so on that people in the Internet are only just starting to think about now. VC: That's true, although I have to say that the real issue that comes to my mind is not so much the power or something as basic as that. It is the sheer complexity of the system and the incredible things that can go wrong if you put the tables together incorrectly or configure something incorrectly. The fragility of all of this is truly scary. Anyone who knows anything about the air traffic control system or the banking and brokerage system should be shaking in their boots about how deeply we rely on things that we don't necessarily understand fully. Perhaps I am overreacting -- the Internet is no worse than any of the other major electronic infrastructures that we depend on daily, but I know more about it and the others scare me more. The Internet's MTV Generation APC: What do you see as the impact of embedded systems, which you have called 'ToasterNet'? (I thought that was a great name.) VC: I am really excited about it, although I must say today I learned the flip side of it, which is having a computer embedded in my hearing aid which requires another computer in my control unit and a protocol to talk. The effect is something very good, which is that my hearing aid isn't screwed up by my neighbour's control, but on the other hand it means that it's more complex to deal with when things go wrong. I'm still very excited about programmed control, partly because you can make the devices far more intelligent about what they do than one has today. There's this opportunity for remote control which is attractive, although it's also a bit of a risk factor too, because somebody can go reprogram your house while you're on vacation. That's sort of a daunting thought. 
My personal feelings right now are overwhelmingly in favour of the flexibility of software when it comes to biting off bits of computer power and making devices adaptable to my needs. APC: I always think it's has the makings of a great B-grade horror movie, 'The Attack of the Killer Toasters'. What do you see as the impact on our children's generation: if the current generation is the MTV Generation, affected by unforeseen effects of television, what do you think might be the impact of these technologies on the next generation? VC: I think our intuitions are always poor when it comes to the law of unintended consequences. I would say that at least this generation will become much more accustomed to convenience and time-independent communication. One could have said the same thing of letters and the postal service, except it's the real-time and somehow the convenience of being able to just whack off an email message to a relative. It's beguiling. It helps me maintain a much larger circle of friends than I could have otherwise. It keeps me in touch with friends and family (pardon me from using that ['Friends and Family' is the name of an MCI discount calling plan]). I have found it to be striking, just absolutely striking, in its power because I now maintain or have renewed friendships with people that I haven't seen for 20 or 30 years. The other thing, and possibly the most powerful one, is the way in which the Net supports a kind of serendipitous communication. One doesn't know with whom one might be communicating when you post on a discussion group or put something up on a Web site and have someone find it then get in touch with you. It is this discovery of others with similar interests that I think is an incredibly powerful force. I think we probably don't fully appreciate the richness of our interaction or our potential interaction now. 
New ideas, inventions and things of that sort mean that the true meaning of serendipity can emerge from this unexpected encounter with an idea or person with whom you share some interest. It's probably that area which I think will exhibit the most profound impact. Then of course I'm sure I don't know the one thing which will be the most significant impact, because I am not smart enough.

Real Cyberspace in our Lifetime

APC: No one would have predicted the MTV Generation in the 1920s. Do you expect that within your lifetime you might see cyberspace of the kind William Gibson envisages in his books?

VC: Perhaps not quite in the same dramatic fashion that Gibson predicted. By the way, I would urge you to look at Vernor Vinge's work True Names if you can find a copy. It preceded Gibson's Neuromancer by a considerable period. It raised very similar visions of a cyberspace world populated by these metaphorical dragons and beasts and the like that were simply manifestations of programs. I think the answer is that in my lifetime -- I hope I can count on having 10 or 20 years -- yes, we will see something like that. Not quite in all of its dramatic glory (or gory, perhaps), but I think we will see more three-dimensional presentations. We will see virtual representations of a world which doesn't exist in the physical sense but which has reality in its virtual world, [and] is interconnected to the real world. Here I want, as strongly as I can, to make a point: the virtual world of the Internet can be and is embedded in the real world. We can interact with real things, with real systems, on the Internet even though we may perceive them as part of a rather virtual place.
You can imagine creating what looks like a bank lobby and going in and interacting with a simulacrum of a person, only to discover that you've actually smoothly moved from interacting with a program to interacting with a real person at the other end of a call centre that you're now talking to through the Internet.

Divorced from Responsibility

APC: Does that level of artificiality perturb you?

VC: No, actually I get very excited about it because it allows us to create places that might not be possible otherwise. I like the idea of creating virtual classrooms that can be populated by kids around the world. While they are conceptually in the simulated environment, the actions they take can have some real impact in the world. There are a number of scientific instruments that are connected to the Internet already. So people can actually create what looks like a virtual place and have an interaction with the virtualised environment, only to discover that they're actually controlling some real-world system.

APC: Does it worry you that divorcing people from the immediate consequences of their actions may have unintended negative effects on people's behaviour?

VC: I take your point. Yes, that does worry me, and I have gone out of my way regularly to try to reinforce the idea that actions taken on the Internet also have real-world consequences. There are real people on the other side of the network. Flaming somebody because you're protected by your screen is no better than behaviour we see on the roads, where people get into their cars and suddenly become monsters because they're protected by this glass envelope. There is a real risk, and that risk is partly a consequence of perceived anonymity. Even on telephone calls you can misbehave horribly if you haven't identified yourself to the other party. So perhaps what one looks for here is the return of responsibility through the removal of anonymity.
Next Phases for the Internet

APC: What do you see as being the next phases for the Internet?

VC: It has only penetrated about 10% of the US market, so plainly we have quite a long way to go in terms of making the Internet accessible to a larger fraction of the population, on both the business and residential side. We have an enormous amount of work ahead of us to make effective use of the technology in the school environment. ... If you don't have continuous access to these computing resources, they're damn near useless. Imagine that you'd just invented pencils and paper and there aren't very many of them around, so you have paper-and-pencil training 15 minutes a week. Just imagine how useless that would be compared to being able to write things down whenever you needed to, 24 hours a day. We have this stupid situation now where people don't have enough computing equipment in the school environment. Kids can't take the machines home, and they can't really learn to use these things as daily tools if they don't have access to them whenever they need them. That's one big phase change that's going to have to take place: getting this kind of equipment literally everywhere, including everyone's homes, so the kids have continuous access to it.

A Quantum Leap for Search Tools?

APC: Do you foresee some sort of quantum leap in the tools available for seeking and finding information on the Net? Currently there are things like AltaVista and Yahoo, but it's very time-consuming to locate things.

VC: I hope so. I certainly found AltaVista absolutely spectacular in terms of what it turns up. I think, though, that's just the tip of the iceberg. People need much more focused and more organised information. It's fun to go Web surfing, and the tools help considerably, at least in finding things in the area of interest that you might have. But when you are looking for a particular thing, it would be helpful if we had more organised views of content.
A lot of work needs to be done before we get there.

APC: Do you think AI [artificial intelligence] has a role to play?

VC: I think it does, but right now I am relying on the librarians and their 2000 years or more of experience trying to organise information. They were all worried that they would have no jobs to do because computers would take over. I beg to differ: I think we need them now more than we ever did, because of the vast quantity of material that needs to be more thoughtfully organised.

Networks and People with Disabilities

APC: What are your thoughts on the potential of the Internet to benefit people with disabilities?

VC: It's very powerful. I have spent, as you can imagine, a fair chunk of my time trying to persuade people with hearing impairments to make use of electronic mail, because I found it so powerful myself. It's taken ages because they've all been accustomed to using these damned TDDs (teletype derivatives). There's a good example of the law of unintended consequences: AT&T and Western Union tried to do the deaf community a favour by supplying them with these things free of charge 30 years ago. Now there are a million or three of them out there, and they have an installed base that they're not willing to shift away from, and yet they work horribly with computers because they're not using proper modems or anything. I have seen some very dramatic results as a consequence of computers and technology for people with various impairments. There's an organisation called the National Cristina Foundation that makes a practice of matching up people with disabilities with people who have supernumerary computers, so the two can share. One of the best examples of the power of computing to help people like this is a story about a young child with muscular dystrophy.
She had very poor motor skills, so trying to write something with pencil and paper was extremely difficult; of course many mistakes were made, and it was just as hard to erase as it was to write. This child was given a Macintosh and learned to use the keyboard -- not necessarily in an efficient way, but it was a dramatic improvement because mistakes were easily corrected. The child became far more fluent in learning to read and write. If you can't write you're not motivated to read, and vice versa. ... I think it has a tremendous equalising potential for people with disabilities, and I look forward to seeing it employed more and more in that way.

Father of the Internet

APC: How do you feel about the mantle 'the father of the Internet'?

VC: It's an awkward sort of situation because, as you and I both know, there have been thousands of people involved, and dozens have been key players. ... I had a colleague who did the basic design work with me, Robert Kahn, but after this was done I was the guy with the lead responsibility for developing the technology and leading the R&D program, ultimately ending up at ARPA, where I carried that responsibility until 1982. For nine years, from the beginning of the research in 1973 until I left ARPA, I felt very paternalistically responsible for it. So in some sense I can justify the label. I didn't seek it; the problem is that the press needs labels, and I think our culture likes to have heroes. I don't fully understand it, but I accept that we have a culture that needs to have spotlights on people. I know that some of the important contributors, friends of mine, are not happy with people who say that, and I don't quite know what to do about it. When I explain that I am fully conscious of the contributions of everyone else and would be happy to be known as one of the fathers of the Internet, it doesn't work. I've sort of given up and accept the fact that labels stick. The people who are bothered by it will always be bothered by it.
APC: Vint, thank you so much for your time. I have very much enjoyed talking to you, it's been great.