ICIN 2013, Unlocking Value from the Networks

Notes by Warren Montgomery

 (wamontgomery@ieee.org)

 

ICIN 2013 was held in Venice, Italy, October 14-16, 2013.  This is the 17th ICIN, a conference which started in 1989, and the 12th ICIN I have attended.  The focus of this conference has evolved over the years, but continues to be on services in communication networks, this year focusing especially on WebRTC and Virtualization.  The conference took place in a Telecom Italia conference facility in the heart of Venice, 2 blocks from the Rialto bridge.

 

Some major themes from the conference included:

  • WebRTC is coming and is expected to have significant impact on the competition for services.  VoIP has enabled “Over The Top” competition for traditional telecom services for over a decade but WebRTC goes further in enabling it.
  • Voice is becoming a service capability, rather than the service itself.  For most of the history of telecommunications, the focus has been on the Voice Call as the dominant service.  WebRTC treats Voice as an aspect of a service (e.g. chat with an agent to complete a transaction) rather than a complete service.  Concurrent with this there is evidence that younger users in particular are turning to text and other forms of electronic communication in preference to making a “phone call”, contributing to a decline in revenue and in some cases in minutes as well.
  • Virtualization creates new alternatives for service architecture.  Virtualization can be applied to the network infrastructure for cost savings or the service components themselves, further decoupling processing from physical location.  It’s not yet clear what the winning approaches will be.
  • Competition for service customers takes many forms, and the best approaches will no doubt vary according to the local situation.   We saw several innovative services presented from different areas of the world, but the appeal is almost certainly dependent on local culture, regulation, and competitive environment.

 

Some Observations on the Conference

 

One general comment and an apology.  I was unfortunately less attentive this year than in the past because I was attempting to clarify whether a planned transit strike would impact my travel plans and to make whatever alternate arrangements were required.  This proved surprisingly difficult, even with good communication and information sources, because networks can only deliver what people tell them.  As a result of this, and my subsequent travel, my notes are late and sketchier than in some past years.

The Size and Scope of ICIN

This conference was essentially half the size of the last one.  There are many factors that probably contributed to this:

  • Continuing downsizing in the industry, especially among traditional carriers and major equipment vendors, who have been the core of support for the conference
  • A narrower focus in the call for papers, which may have discouraged some submissions
  • A venue that was very nice, but expensive and somewhat difficult for many to get to, and had no local base of potential attendees
  • Continuing difficulty in engaging next generation companies in the conference.  Some say it’s not in their culture to attend conferences in the same way as the traditional vendors and carriers did, but Hagen Hultschz probably put it best as a metaphor for the industry: “We tried to engage startups and next gen companies, but they don’t want to work with us.”

 

The bottom line for me though is that the set of people involved in organizing ICIN (The technical program committee and the International Advisory Board) is still largely the same community that I joined a decade ago.  They are enthusiastic and active, but like me many have retired or taken other positions where they no longer work with large numbers of colleagues who are potential presenters or attendees that they can recruit to the conference.  Even those still working with the same large companies generally find it much more difficult to send people to conferences.  The result has maintained the quality of the conference, but not the quantity of material and attendees it once drew.

 

A Note on the Venue

The venue was a Telecom Italia center in Venice, a renaissance building that had been refurbished as a conference hall.  Because of the size of the conference, there was only a single track of sessions and all were held in a common hall.  This was perhaps the best equipped conference hall I have ever experienced.  Each seat had a local video monitor showing the presenter’s slides, a microphone for questions, and power and internet access for personal devices.  Curiously I found that some of this diminished the interaction:

  • Attendees watched their own screens rather than the speaker, in part because the common screens were difficult to view because of the hall lighting.  In doing this they failed to catch the speaker’s non-verbal communication.
  • Questions asked over the microphone were clear, but it was difficult to tell who was asking unless you recognized the questioner’s voice.  The disembodied voice came out of the audio system without it being clear who was talking; again you missed the non-verbal communication, and it was difficult to identify whom to follow up with.

 

Both are really attributes of most conferencing systems where the participants are remote and perhaps indicate continuing challenges in using technology to replace physical closeness.

 

ICIN Tutorials

This year I got to attend both tutorials, one on Network Virtualization and one on WebRTC.

Network Virtualization (Igor Faynberg and Hui-Lan Liu, Alcatel-Lucent)

 

This tutorial covered several aspects of the virtualization of network functions. 

 

The problem:  In the classical network approach, new functions are encapsulated in physical products, so getting new functions requires buying and installing new equipment – a long process because of certification and capital budgeting. 

 

The solution:  Virtualization, where functions are encapsulated in software supplied by independent vendors that can be placed on “commodity” computing, storage, and networking. 

 

  • Separates the capital intensive process of acquiring capacity from the acquisition of new functions. 

 

History:  Igor described a series of presentations and conferences related to virtualizing network functions, beginning in March 2012, which resulted in the formation of an industry specification group around network functions virtualization and ended with the publication of a carrier whitepaper in early October 2013.

 

To illustrate the concept that virtualization isn’t new, he showed a picture of a gondola and a canal lined by shops and asked where it was.  (I supplied the answer – the Venetian hotel casino in Las Vegas, where I once attended a conference – the fact that everything is sparkling clean, including the water in the canal, is a dead giveaway :-)  He also cited the VM/370 work in 1972 as another example.  (Comment – he’s right; IBM introduced the concept of a virtual machine, in which the software thought it was running on a raw machine, in part as a way of moving applications designed for older hardware and dedicated systems forward into a new architecture that imposed an operating system between the application and the machine so that machine resources could be shared.)

 

The motivations for virtualizing network functions include:

 

  • Saving the cost of dedicated machines
  • Using otherwise wasted capacity.
  • Creating test environments at low cost
  • Migrating applications to new hardware at low cost.
  • Isolating a specific appliance (e.g. a server) for a specific purpose (like security) without buying new hardware.  (Comment – yes, but you have to have high trust in the virtualization software and not be vulnerable to overloads of shared resources (e.g. a denial of service attack against the virtual machine’s IP address can choke the real platform’s internet services))

 

He showed a scatter plot of network functions on two axes:

  • How much cost saving is to be gained through virtualization.
  • How much control is to be gained by doing this.

Core routers are near the low end on both axes, applications near the high end on both, and other functions scatter across this characterization.

 

Some of the challenges in virtualization are:

  1. Performance (especially guaranteeing performance for functions with specific requirements)
  2. Migration from legacy systems, which may assume physical location.
  3. Managing and integrating many systems while ensuring security both from attacks and configuration errors.

 

The Industry Specification Group is an ETSI vehicle that is led by network operators to address technical issues, but does not develop new standards.  It’s there to coordinate input to existing standards processes to address gaps.

 

He gave a series of definitions of key terms – network functions, virtualized network function, infrastructure, network point of presence, etc.  The process used 9 use cases to illustrate network virtualization.

 

At this point Hui Lan presented the details.  She showed a slide illustrating how network control is really a chain of multiple functions, and which functions participate in that chain. 

 

Igor then came back to talk about virtualization of networks and CPU.  He started with a discussion of memory management and virtual memory.  He went through some definitions about virtualization – type 1 and type 2 hypervisors, depending on whether they run directly on the hardware or on top of an operating system.

 

To be virtualizable, the application can’t contain machine instructions which silently change the state of shared memory or other shared resources.  Some applications aren’t virtualizable by this criterion (e.g. those written for a bare x86 architecture).  You can address this by binary rewriting, which inspects the instructions, identifies the problematic ones, and rewrites them to trap into the hypervisor where they are emulated.  Paravirtualization does this at compilation time. 
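(Comment – to make the binary rewriting idea concrete, here is a toy sketch in TypeScript.  The instruction names and the “hypercall” operation are purely illustrative, not any real instruction set or hypervisor interface; the point is only that sensitive instructions are found and replaced with traps that the hypervisor can emulate:)

    // Hypothetical instruction stream; "sensitive" instructions are ones that would
    // silently change shared machine state if allowed to run natively.
    type Instr = { op: string; args: string[] };

    const SENSITIVE = new Set(["write_page_table", "disable_interrupts"]);

    function rewriteForVirtualization(code: Instr[]): Instr[] {
      return code.map(instr =>
        SENSITIVE.has(instr.op)
          ? { op: "hypercall", args: [instr.op, ...instr.args] } // trap to the hypervisor, which emulates the effect
          : instr                                                // harmless instructions run unchanged
      );
    }

    // Example: the guest's privileged operation becomes a hypercall.
    const guestCode: Instr[] = [
      { op: "add", args: ["r1", "r2"] },
      { op: "disable_interrupts", args: [] },
    ];
    console.log(rewriteForVirtualization(guestCode));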

 

Hardware assisted virtualization uses hardware to do the inspection and trapping processes.  (Comment – some architectures make this a lot easier than others, and it’s a shame that the winners in the marketplace are sometimes  the ones which present the most problems)

 

Intel has introduced a new set of capabilities to assist virtualization, with new instructions limited to the “kernel”, to do this.  AMD has done this as well, but the two approaches are not standardized.

 

Xen (an open source virtualization platform) has two kinds of domains, 0 and U.  Domain 0 hosts an operating system (typically Linux) that handles the devices, while domain U hosts the other applications, which have their I/O requests trapped and handled by the operating system in Domain 0. 

 

 

Open topics include:

 

  • Software licensing (who owns it?)
  • I/O performance (is it adequate?).
  • Security (does encapsulation of VM really contain security?)
  • Availability of data to a machine at a given physical location (i.e. the legal matters that arise when things have to be in particular locations).
  • What may and may not be guaranteed in service level agreements.

 

What about network virtualization?

 

There are existing standards for VPNs at two levels (layer 1 and layer 2).  To use this with network functions we need to ensure the separation of control and data planes (Software Defined Networks (SDN) and an IETF process).

 

How to virtualize storage:  Unix (and its contemporaries) changed the view of files from a physical view to a bit stream.  Later, that was generalized to include the concept of physical distribution and redundancy (RAID) underneath that bit-stream view.  Storage virtualization separates access from location.  Block virtualization separates at the level of disk blocks or sectors (handling systems that don’t share the “stream of bits” view?)

 

Hui Lan talked about the application of this technology to real problems in cloud computing, starting with some motivations (scalability is the big one).

 

She talked about how management of the virtual environment fits the traditional role of network and service management systems and how they fit into an architecture, highlighting new functions needed to serve the needs of a virtualized environment.

This includes the normal management of virtual machines, but also monitoring the lifecycle and health of the functions and re-starting ones that run into trouble.

 

Igor talked about Big Data:  “The data that exceed the capacity of the conventional database Systems” (Comment – a little like defining Artificial Intelligence as programming that can’t be done through conventional techniques).  Big Data is defined by 3 V’s – Volume (size), Velocity (speed of access), and Variety (things that are beyond the usual Relational model).

 

He talked about the MapReduce operation, which comes from Common Lisp, as an example of how to process large databases.  This takes a single operation and maps it onto all items of a list, combining the results into one entry (e.g. summing a group of elements into one).  This is a parallelizable expression of what is normally a serial process.
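(Comment – a minimal sketch of the map/reduce idea in TypeScript; the data here is invented, and real MapReduce adds key grouping and distribution, but the shape is the same:)

    // "Map": apply one operation independently to every item of a list.
    const values = [3, 1, 4, 1, 5, 9];
    const squared = values.map(x => x * x);

    // "Reduce": combine the mapped results into a single entry (here, a sum).
    const sumOfSquares = squared.reduce((acc, x) => acc + x, 0);

    console.log(sumOfSquares); // 133 -- each map step is independent, so it parallelizes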

 

Hadoop (from Apache) is an implementation of this.  You do this by breaking the list into splits, each of which can be sent to a different server, then combining the results of the splits.  For this to work on a cluster configuration though, the implementation has to understand the configuration to make the right number of splits and minimize the amount of information that gets sent between server racks.
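(Comment – a toy illustration of the split-and-merge idea, again in TypeScript with made-up data; a real Hadoop job would distribute the splits across servers rather than process them in one process:)

    // Count words in one split (as if running on one server).
    function countWords(split: string[]): Map<string, number> {
      const counts = new Map<string, number>();
      for (const word of split) {
        counts.set(word, (counts.get(word) ?? 0) + 1);
      }
      return counts;
    }

    // Merge the partial results from all splits into the final answer.
    function merge(partials: Map<string, number>[]): Map<string, number> {
      const total = new Map<string, number>();
      for (const partial of partials) {
        for (const [word, n] of partial) {
          total.set(word, (total.get(word) ?? 0) + n);
        }
      }
      return total;
    }

    const input = ["icin", "webrtc", "icin", "cloud", "webrtc", "icin"];
    const splits = [input.slice(0, 3), input.slice(3)]; // two splits, e.g. two servers
    console.log(merge(splits.map(countWords)));         // Map { 'icin' => 3, 'webrtc' => 2, 'cloud' => 1 }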

 

Many projects are using this approach to build various tools for working with “big data”.

 

WebRTC (Dean Bubley and Tim Panton)

 

The WebRTC tutorial was presented by two consultants, Dean Bubley, and Tim Panton, both from the UK.  Dean presented the context and overall approach to WebRTC, while Tim presented the technical details.

 

Dean began with a brief history of how things that used to be independent, like media browsers and chat, got absorbed into the web browser.  (Comment – yes, but not always, and not always smoothly.)  He presented WebRTC as the same approach for voice and video communications, like Skype is today. 

 

One motivation for this is easy integration of voice with other on-line activities – i.e. instead of having “push to talk to an agent” call using your phone, it happens on your PC.  (Comment – yes this is clearly easier, but it requires the user have their audio capabilities on and properly configured, which in my experience is often not the case for PCs.  Perhaps this is more likely to work on Tablets and other true mobile devices).

 

WebRTC has lots of backing from internet companies, carriers, and Telco vendors.  Attendance at workshops, standards meetings, and conferences focused on this has been growing rapidly. 

 

One distinctive feature of WebRTC is that it has no predefined signaling (left as an exercise to the developer?)  The commercial ecosystem work on business cooperation is outpacing the technical standards.  (Comment – I’m not sure how this is workable without some standards for signaling.  Perhaps what’s really happening is that signaling standards will come later once it  becomes clear what wins in the marketplace, which might be a better approach given what happened in many other cases, like Instant messaging or even VoIP, where the early standard wasn’t necessarily a winner in the marketplace)

 

Voice and Telephony were closely associated in the past (Voicemail, conferencing, and other non-telephony uses of voice  made up only a small part).  In the future, there will be much larger applications for Voice that aren’t phone calls (social networking, surveillance, gaming, etc.)  Voice is moving from a service to a function.  (Comment – this is interesting.  One of the things that I found intriguing when I first entered the telephony world in 1978 was that the phone call was the whole service, it wasn’t just one application that could be invoked by a user.  This perspective of putting the “call” in the center really created problems in extending architectures designed for telecommunication to applications that were very different.  I rapidly became a contributor to that view, but in retrospect it’s clear that it was limiting)

 

A Telephone call has many drawbacks.  A major one is “Hegemony of the caller” – the caller is in charge – you have to answer the call at their schedule.  This is one of the things that drives people towards other, less disruptive ways of communicating (e.g. text).  A telephone call had the objective of “the next best thing to being there”, but the objective really should be “better than being there.”  (Comment – interesting, there certainly are ways to communicate that improve on “face to face”, but also many ways in which electronics still fail to match “being there”)

 

What’s the role of video?:  It needs to have a clear purpose, and it’s not everywhere (e.g. not while driving).  There are a lot of issues around ergonomics, social norms, etc. that basically mean there has been little pull from the end user for video.  Instead, video has a niche where someone can essentially drive the other participants (e.g. interviews, surveillance, any application where recognizing the other person is important).

 

The focus of telecom on QoS, interoperability, and minutes and messages doesn’t fit.  QoS requirements vary.  Interoperability is essential for only the lowest common denominator service, and the value is in intention and outcomes.  “Telephony is a dumb service because it doesn’t recognize the reason the call is being made and that some calls have more value to the end user than others”

 

He raised the question of why users make telephone calls at all, as something to think about during the following presentation on the technology.

 

You can make a WebRTC call from the site http://phono.com  (It’s a demo site and I believe still live)

 

Why are there so many standards for web calling?  Security, variability, and programmability mean a web browser is a much less standard environment than the telephone network.  NAT (and IPv6/IPv4 interoperability) is another reason for complication.  Some solutions are only partial: STUN and ICE work about 80% of the time to get through firewalls and NATs, but sometimes they fail, and TURN is a workaround for those cases (it uses a server in the network that relays the packets).  (Comment – this is one of the things that traditionally distinguished telecom from the internet – the internet is a best effort service, and there are layers of recovery procedures, each more resource intensive and/or less desirable, that cover the cases where the layers before fail.  Telecom traditionally designed the lowest layer to be 100% compatible and thus everything above can assume it works.  That’s nice if you have complete control; the internet wasn’t designed that way.)
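(Comment – for the record, this is roughly how a WebRTC application hands STUN and TURN servers to the browser; the server URLs and credentials below are placeholders, not real servers:)

    const pc = new RTCPeerConnection({
      iceServers: [
        // STUN: lets the browser discover its public address to get through NATs.
        { urls: "stun:stun.example.org:3478" },
        // TURN: the fallback relay used when a direct path cannot be found.
        { urls: "turn:turn.example.org:3478", username: "user", credential: "secret" },
      ],
    });

    // ICE then gathers candidates (host, reflexive, relay) and tries them.
    pc.onicecandidate = (event) => {
      if (event.candidate) {
        console.log("candidate:", event.candidate.candidate);
      }
    };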

 

He described “sharefest.me”, a service designed to let two users share data, like Dropbox, but which discovers whether you are both local to some network and sets up the connection peer-to-peer to avoid the bandwidth and cost issues of reaching a common server.
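(Comment – the browser mechanism behind a service like this is the WebRTC data channel; a minimal sketch, with the signaling needed to connect the two peers left out:)

    const pc = new RTCPeerConnection();
    const channel = pc.createDataChannel("fileShare");

    channel.onopen = () => {
      // Once the peers are connected, data flows browser to browser, not through a server.
      channel.send("first chunk of the shared file");
    };

    channel.onmessage = (event) => {
      console.log("received from peer:", event.data);
    };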

 

There are several different kinds of codecs for WebRTC.  Mu-law, the solution used in telecom networks, is narrowband and fixed bandwidth and has trouble with lossy and variable delay networks.  An alternative, Opus, is wideband, can adapt to variable bandwidth, and copes more easily with network impairments.  (Comment – even the definition of what “voice” means gets involved here.  From the early days of telecom, “voice” meant essentially a 3.5 kHz bandwidth, because that’s what analog technology could reliably deliver to endpoints connected over copper wires with only passive components.  That limit resulted in a standard – Toll Quality – which essentially requires a voice path to deliver only that to be considered “perfect”.  Mobile phones, which could not at first deliver this quality, essentially further lowered the standard for what users perceive as “good” in a voice call.  The internet never suffered these limits, so expectations for audio associated with the web may indeed be more like “CD Audio”, which effectively reproduces all the frequencies that can be heard by a human being.)

 

The W3C APIs, specified as “JSEP”, give the application the ability to grab the camera and microphone in a device supporting a web browser.  (He had some sample JavaScript code to create a peer-to-peer video connection.)  This allows web applications to create audio and video connections, which is the basis for WebRTC.  He demonstrated a number of voice and video services from Phono.com, a web site constructed to demonstrate these capabilities. 
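(Comment – not his actual demo code, but a minimal sketch of the APIs being described: grab the camera and microphone, attach them to a peer connection, and create the JSEP offer that the application must deliver to the far end over whatever signaling it chooses.  The sendToSignalingServer function is a hypothetical placeholder:)

    async function startCall(): Promise<void> {
      // Ask the user for camera and microphone access.
      const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });

      const pc = new RTCPeerConnection();
      stream.getTracks().forEach(track => pc.addTrack(track, stream));

      // JSEP: the application drives offer/answer; the browser only builds the SDP.
      const offer = await pc.createOffer();
      await pc.setLocalDescription(offer);

      sendToSignalingServer(offer.sdp ?? ""); // application-defined signaling (placeholder)
    }

    // Hypothetical stand-in for whatever signaling the application chooses.
    function sendToSignalingServer(sdp: string): void {
      console.log("would send SDP offer:", sdp.slice(0, 80), "...");
    }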

 

He showed a lot of potential use cases for WebRTC, including browser to browser, browser to telco (IMS and PSTN), browser to IP PBX, browser to contact center, and conferencing.  He also discussed the opportunities for vertical application areas (e.g. Healthcare, distance learning, etc.)  The vertical application areas generally carry more specific requirements (e.g. security, QoS, etc.)

 

WebRTC is not a new concept.  OneAPI from GSMA was probably the most familiar example of an attempt to standardize APIs for real-time communication,  but there were ones from Skype, Teamspeak, and others.  The problem with these approaches is that they tend to be expensive, inflexible, and have poor developer support.  The key problem is the “phone call” orientation. 

 

He went through the reasons why WebRTC has momentum (similar to the earlier items): 

  • It’s like Skype, but there is no required plug in or download. 
  • The APIs are simple.
  • Google and others are pushing it. 
  • It is a mix of standardization and pragmatics.

 

Doing this is not as easy as the simple examples suggest.  There are many complicating issues:

  • Firewalls and how to get through them.
  • Lots of user experience issues (Comment – let’s start with the fact that IE currently doesn’t support it) 
  • How to handle back or reset in a browser. 
  • Signaling really does need to be addressed. 
  • There are SIP devices that won’t work with it (apparently this includes a popular one sold by BT). 
  • Microsoft and Internet Explorer don’t support it.  They are starting to pay attention, but it is not clear when they will support it.  They want changes. 
  • Apple isn’t supporting it.  The Safari team likes it, but Apple has competing technologies (FaceTime). 
  • Standardizing Video Codecs.  They might get some standard now, but there are new HD codec standards to be dealt with.
  • Everyone wants to use HTML5, but it’s not finished, and some browsers won’t support some parts, so companies just have to deal with it. 

 

“WebRTC isn’t a standard, it’s a movement.”  It’s accessed at several levels: via APIs, apps, Javascript, etc.  The original view was that it would be for Javascript and just define APIs.  What’s happened is that there are now companies like Phono getting into the business that provide pieces of a solution using WebRTC and interface with developers at traditional web companies (e.g. working with British Airways to add voice to their reservations/status app).  This required more than just Javascript.

 

A hierarchy is developing – LiveNinja specializes in interfaces to call center type applications, but runs on TokBox, which provides the WebRTC based platform. 

 

Amazon has a “Mayday” button that communicates with an agent – you see the agent, voice goes over your mobile phone, and the agent can see what you are doing for support.  It’s not clear whether this one is a WebRTC application today or not, but it is an obvious candidate.  (Comment – at some point soon after the conference I started seeing advertisements for this service, and I believe the voice actually went through the device, not your phone.  The ability to design a service like this once, and have it use either the audio capabilities of the device or an associated phone depending on device capabilities and user preferences, would be a major advantage for WebRTC, if it can deliver.)

 

New applications include interfacing other devices, TV and Gaming, and getting network based tools.  For example, PeerCDN, a content delivery service that works peer-to-peer (i.e. find someone who has already downloaded the content you want and peer off them, which comes with all the security and legal issues around other peer-to-peer services.)

 

Other examples include telepresence (you as a robot appearing somewhere else), or security (PC + motion sensor as a baby monitor).

 

First use cases for WebRTC are enterprises (Call me, call centers, etc.)   Consumer applications are starting to appear (video calling “Chromecast”)

 

There are lots of flavors of WebRTC gateway (to IMS, to PSTN, to IP-PBX, and Machine-to-Machine).  There are also lots of vendors, (including all the major telecom vendors).

 

Not everyone wants to interoperate with the PSTN.  In fact some deliberately want to avoid that.  The highest value applications may be of this kind.  He gave an example of a dating site, on which neither the user nor the operator wants to expose user phone numbers.  (The real problem seems to be the numbering plan and exposing it.  He had lots of examples of companies that had no interest in PSTN interworking.)

 

He put up a list of technology conflicts:

 

  • HTTP(S) transport vs SIP
  • RIA 2.0 media vs RTP
  • Encrypted vs. cleartext (WebRTC has mandatory encryption)
  • Opus vs. Mu-law codecs
  • VP8? Video vs. H.263/H.264

 

The solution is probably to build a gateway of some sort, but this is awkward.  HTTP to SIP is easy – use Javascript to build the messages, wrap them in HTTP, and forward them to IMS.  But it still means some kind of gateway, and the media might not be compatible. 
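(Comment – a rough sketch of the “build the message in Javascript and carry it over HTTP” idea; the gateway URL and message format here are invented, since a real WebRTC-to-IMS gateway would define its own:)

    async function sendInvite(sdpOffer: string): Promise<string> {
      const response = await fetch("https://gateway.example.org/calls", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          type: "invite",              // the gateway would map this onto a SIP INVITE
          to: "sip:agent@example.org", // hypothetical destination
          sdp: sdpOffer,               // media description from the browser side
        }),
      });
      const answer = await response.json();
      return answer.sdp;               // SDP answer to hand back to the RTCPeerConnection
    }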

 

He went through many other configurations, using SIP in various ways, using REST, and using various new protocols; all had drawbacks.

 

He talked about WebRTC on mobile devices. Some of the issues include:

  • It doesn’t fit into the “App” model. 
  • A browser isn’t a natural interface for a mobile device. 
  • The Opus Codec is more compute intensive and battery draining. 
  • No native App friendly APIs yet – some out there, but ugly. 
  • SIP is not efficient in the mobile world (too wasteful of bandwidth and too dependent on timers, which might time out or just waste battery). 
  • The audio hardware varies, especially on Androids. (e.g. multiple microphones, different audio capabilities, etc.)

 

Both Chrome and Firefox support WebRTC on Android though.  (But if you want to use it use a headset, don’t expect echo cancellation to work well enough to be usable).

 

Identity is an interesting question – what identity do I use?  Users are used to having different identities for different applications on the web (Comment – or the ever popular anonymous.  Nothing drives me off a site faster than demanding that I identify myself just to get information) and they don’t want to mix them.  Both presenters emphasized this, citing a survey from Orange on identity:  people want the ability to lie about their identity, to delete information, to have multiple identities, and to tell different things to different people.  (Comment – I published a paper in ICIN 2001 on this topic proposing that the integration of Internet and Telephony accommodate these kinds of needs (I used the concept of multiple identities to suit the different roles that you play), though I couldn’t present it because of the combination of my retirement and 9/11/2001, which prevented the people I asked to present it from attending as well.)

 

To wrap up he showed interesting projections for revenue for telecom operators.  The US is in decline, even in Mobile!  Spain showed a huge drop in messaging revenue, projected to tail off to practically nothing.  (The projection for 2018 went from 100 billion in the US to under 60, while messaging in Spain drops from over 2 billion to maybe 0.25.)  (Comment – Yikes!  No wonder our industry is in big trouble.)  (Second comment – of course I’m not exactly sure what he includes in those US figures.  That might not include data.)  There is more downside pressure than upside.  This is probably the first year that the overall telecom industry has contracted worldwide.    

 

He suggested part of the problem is the phone call centric model, and that is a historical accident of the way the network was built.  (Well, maybe, but I think it’s more than that – when it was created there weren’t really alternatives.  The phone was something really new and different and the network was correctly architected around it.  The trouble is that that view didn’t evolve with the arrival of pervasive computing, but that’s always the problem with long lived systems.)

 

He had a chart on strategies of carriers with respect to Over The Top (OTT) technologies like WebRTC, ranging from trying to exploit it, to trying to bundle vital service with voice, to blocking or charging for web telephone over the top of data service, to maybe  simply opening it up and telling customers “bring your own voice”.  (Comment – I will again say that in about 2007 I was called as an expert by a mobile carrier trying to decide if “data only” was a viable business for them.  I believe there is money to be made in access, but they need to do it better than competition)

 

He posited that WebRTC accelerates all trends – It makes voice decline faster, and maybe kills Skype?

 

What percentage of value comes from 3GPP?  The old view was that IMS is in the middle of everything.  The new view is that IMS is a tool for replacing old technology, but not important to all these new application areas. (I think the message was that the shift in focus is really a defeat for 3GPP.)

 

His take on WebRTC for telecom operators was that WebRTC is emerging very fast, it’s important to be part of the community, and, crucially, it shouldn’t be confined to IMS and traditional “integration” – no more than 30% of a carrier’s effort should go there.  Every unit in a Telco with a website should be using WebRTC.

 

(The speakers’ information is at www.disruptive-analysis.com)

 

 

Identity was a major topic of the Q&A.  One thing they pointed out is that we should let users manage their own identity.  One reason Blackberries were popular was that they had an identity that wasn’t mapped to anything else.  On the other hand, there is value to business and consumers in some cases to have certified identities.  When someone calls from your bank you want to have some independent proof that the person is a representative of the bank.  You want to know when contacted by a government agent that the person is legitimate.

 

Question:  What should Telcos do?  Answer: go with the enterprise.  Will Telecom Italia make money with TokBox?  Probably, but keep in mind there’s nothing on the horizon to replace the 20 Euros/month carriers now make off voice.

 

Another response – you can’t make money charging by the minute for a web service – not even the Pornography industry has managed that.  WebRTC offers the ability to offer services to someone outside your coverage area (e.g. make a WebRTC call on your home carriers account rather than use roaming.)  (Comment – yes, this has been out there for a long time as a possibility)

 

“We rarely advertise to users to talk more” – young people are not using the phone as much.  WebRTC offers the opportunity to put voice into more areas and perhaps stimulate interest in making phone calls (Comment – interesting, he’s right, phone companies used to promote usage, now they promote new phones or features more, and young people don’t call, but I don’t know how much potential there is to push the situation back)

 

Another suggestion is pay attention to things like TURN – by putting servers in places that create locality a carrier might gain revenue or reduce costs.

 

Question: (Stuart Sharrock) – what’s the right business model?  Answer – make sure you put WebRTC in an autonomous business unit.  Telefonica did this right.  Telefonica digital was set up to adopt things like this, Telefonica Corporate wasn’t.

Acquire the capability and pick the right people – TokBox was originally Flash centric, but wound up doing WebRTC.

Conference Program Tuesday

 

With the smaller number of papers, the technical program for the conference was presented in a single track over two days rather than the past 2-1/2 or 3 day program including some parallel sessions.  As a result I had the opportunity to attend all the presentations.

Keynotes

Roberto Minerva (Telecom Italia)

Telecom Italia was our host, supplying the conference venue in its Future Centre in Venice.  This is a former abbey in the oldest part of Venice (between the Rialto Bridge and San Marco square).  The abbey was converted to a barracks by Napoleon’s army, and eventually taken over by one of Italy’s early telecom companies, which was absorbed into TI during the consolidation of telecom.

 

He talked a bit about aspects of the history of Venice that tourists rarely see:

  • Venice began as an island city state, an important port trading with the Middle East and the Byzantine and Persian empires.
  • The Venice Arsenal, still a military facility not usually open to tourists, was the biggest military supplier in the world.  (The word Arsenal is in fact Arabic and dates back to the days of the Persian empire.)
  • Venice was conquered by Napoleon, which began its path towards integration into the rest of Italy, but also had influence on art and architecture.
  • During the 19th century, Venice was connected to the mainland by a causeway carrying road and railway traffic.

 

The bridge to the mainland transformed Venice, allowing people to enter from land.  It was done to let people come to Venice, but it had the opposite effect – many people in the city moved out, leaving what remained as a tourist center.  (Comment – this is exactly what happened in Whittier, Alaska, a fishing/harbor town built as an ice free deepwater port in WWII that was originally accessible only via railroad.  During the 1990s they designed a way for a highway to share the tunnel used by the railroad to access it, hoping to develop it into a tourist center like Seward or Valdez, but in fact the new accessibility caused half the population to move out of this isolated location, and the tourists who come to access the ferry and a couple of tour boats spend little time there.)

 

Roberto talked about how our networking technology applies to Venice in various ways – location tagging of sites, communication to overcome the divided nature of the city, etc.

 

New Security Challenges Facing Cloud and Mobile Expansion (Juan Velasco, Aiuken Solutions Spain)

He began by talking about the origin of electric utilities and the battle between DC (Edison) and AC (Tesla), as an example of some of the technology battles taking place now among competing communication technologies.  The notion of an electric grid is analogous to “the cloud”.  At first people resisted connecting to an electric grid because they wanted to know where their electricity was coming from and control cost and reliability, but now it’s a utility we depend on and leave to others to provide.

 

He gave some examples of getting efficiency in cloud implementations

 

Mobile data traffic is basically doubling every year with no apparent leveling off  (Comment – haven’t I heard this somewhere before?)

 

Our increasing reliance on mobile devices comes with risks.  Loss of devices (losing data, privacy) is a big one.  Connecting personal devices to private networks is another, but “Bring Your Own Device” is an increasing demand.  He suggested that enterprises actually lag consumers in use of mobile devices (more use of desktop computers). 

 

One of the big problems we have on the internet is anonymity.  There’s no way to identify securely where people are coming from.  (“On the internet nobody knows you are a dog” is still true).  The problem in his perspective is IPV4 and NATs.  With the number of devices now constantly connected this is not sustainable. 

 

The real problem is denial of service attacks.  This has become much more of a problem as the number of spammers, botnets, and other threats has increased. 

 

Architecture Evolution – some observations on recent developments (Ulf Olsson, Ericsson AB)

 

The network has evolved through mashups – combinations of things people have done before.  Networks are ideally suited to support this through service exposure (much as envisioned with Parlay).

 

I missed much of his presentation because I was dealing with travel problems, but he basically highlighted the ability of the network to be key in analytics, and the role of ICIN in providing a forum for discussion of network evolution to match the challenges.

 

Strategies for the creation of new network services (Naoki Uchida, NTT)

 

He is leading a project on strategy for distributed services.  He began by outlining the “Valley of Death” – actually several gaps in the implementation that must be crossed to reach success.  The one he addresses is in technology (others are in business and promotion). 

 

He presented a service model built around a hierarchy of services (super-classes with abstractions of services, services, capabilities, and primitives). 

 

He showed a movie with a concept for a video assisted support service, where an agent was helping a customer who had some kind of video device and a bunch of cables to connect it.  This used something they called “Airstamp”, which allowed the support agent to mark things on the video display to illustrate what was being said (e.g. which cable the agent is talking about when he tells the customer to plug a cable in).  The innovative part of the service is that it remembers what part of the scene was tagged, and continues to highlight that object in the display even when the camera position changes.  (This is important for the intended application using the camera of a mobile phone, since you want the highlighting to remain on the same object if the user moves the phone.)  (Comment – I bought this at the time, but in retrospect I wonder whether the application wouldn’t have worked just as well if the agent was allowed to freeze the display.)

 

He also showed an application called “Sightfinder”, which alerts the visually impaired to obstacles by using recognition technologies on a video stream from a phone they are carrying.  This has been presented at ICIN before; I’m not sure whether this version is more advanced.  This is quite complicated, and I expect that the performance of the system in an unconstrained environment would be a challenge.

 

Another application of the same underlying technology used voice to query and report about objects like product packages on the shelf of a store. (Comment – there are many applications like this in the US that use UPC codes to identify the product, I think this particular system doesn’t rely on them, which enables browsing products like produce that don’t have them.)

 

Another application used highlighting to indicate colors the user could not see (i.e. for someone with color blindness selecting produce to distinguish oranges, grapefruits, apples, tomatoes, etc.)

 

He showed an architecture for this service that allows applying intensive processing to the media stream, and incorporates HTML5 and WebRTC.  He talked about applications to business, mobile, and personal services.  He then showed a smart TV prototype.  It uses Linux and Android in the home with network components.  He showed a scenario for interactive picture book reading (pre-school instruction) based on this.  He talked about “Informative City”, which allows you to get information about your environment through your mobile device.  You can also share the information with other persons. 

 

Another video talked about WiFi location based services.  This used location information based on the base station your device talks to in order to track where you are.  Services included recommending an elevator to someone approaching a stairway that could be difficult.  There were others that looked just like normal information services with only limited location content.  (Comment – probably because the location capability is not that precise)  Mostly this seemed to be aimed at giving information to anyone who appears in range of a particular access point.  They have apparently developed a streamlined browser for this that minimizes power usage (a problem with WiFi and most mobile devices.)

 

To support this work he presented 3 types of service development programs:

  • Type 1 – direct development for mature products
  • Type 2 – service concept visualization in prototyping
  • Type 3 – core R&D

 

He showed a video of a “Type 2” project.  It illustrated “Remote Walker”, which basically lets a viewer see remote places by choosing their visual viewpoints and fields.  The system captures a 360 degree view, and then the user chooses the field by orienting a tablet or smartphone.  Users can pick which viewpoint they want by moving the tablet.  Live media processing was a 3rd part needed.  The video illustrated a business application of someone watching a remote presentation and picking viewpoints based on where the speaker stood.  (Comment – many years ago I tried to convince a friend who worked in the media presenting golf tournaments that the ability to pick which of all the cameras on the course the user wants to view would be a capability that users would pay for.  I believe the same applies to other sporting events (e.g. follow your favorite player).  He told me that the limitation that prevents this isn’t cameras or bandwidth per se, but cables.  The camera feeds have to be selected and consolidated in stages and there’s no place where they all appear because of limitations in getting cables into trailers.  Maybe technology has solved that by now)

 

Session 1: Network Functions on the Move (Marc Cheboldaeff, T-Systems)

The theme of this session is on functions that are moved by trends in networks, like exploiting cloud computing and virtualization.

 

The Virtual Set Top Box (Alexandra Mikityuk, Telekom Innovation Labs, Germany)

Today, set top boxes to deliver digital TV are proprietary and closed.  Extending them is a major problem for operators.  By migrating the functions into the cloud they gain scalability and capacity, and solve both a cost problem of dedicated processing and a management problem of managing upgrades.  HTML 5 is an enabler for this.

 

The approach can be thin client, based on browser in the physical set top, or zero client, where all the rendering is done in the cloud.  They started with a thin client and want to migrate to zero client.  There are also variations in how video streams are merged going to the display and whether all combination is done in the cloud or whether  two streams (content plus User Interface) are sent to the endpoint and combined locally (they used the latter in the prototype).

 

They did some experiments and actually found that cloud rendered UI had lower latency for many operations.  The explanation given was that the UI requires several interactions with cloud based servers to present each screen, and this happens faster when the UI is generated by a server in the cloud than if each of these is a transaction directly from the endpoint.  The architecture poses many challenges in privacy, licensing, etc. because it changes assumptions about where software runs. 

 

Question:  What’s the reaction from TV providers?  Answer: Mixed.  Some interest, but they haven’t reached the right people. 

 

(Comment – several years ago I did some exploratory work with a startup that was looking to put a sophisticated service environment on a set-top box to provide telephony and multimedia services.  Set Top Box capability and variation was a major barrier for them.  Having many generations of “old” set top boxes in the field was a significant problem for the cable operators as well, but they were costly to update because most updates required sending someone to the user to replace the old box.)

 

Question:  Doesn’t this place a big burden on the access network?  Answer – yes, (I think she was talking about putting the service in the access network to keep it local.  I think she also talked about caching locally).  (Comment – I don’t know that this is a problem because the network has to be able to support HD video to each customer and increasingly unique on-demand feeds anyway)

 

Question (Rebecca Copeland):  How can we use this to make the user experience better?  Answer – consistency – everyone gets the same experience and gets “upgrades” instantly.  Rebecca also asked about multiple device households and I’m not sure she addressed that.

 

(Comment – anyone who has had to replace a cable modem or a set-top can appreciate that avoiding the need to physically replace and re-register the equipment is a major plus for the user)

 

Manifesto of the Edge ICT Fabric (Antonio Manzalini and several other authors from Telecom Italia, research centers, and universities.)

 

Important current trends include faster, cheaper, generic hardware; open source software; software implementations of L3-L7 functions as virtual functions instead of middle boxes; and an explosion of connected devices with massive communication needs.

 

He showed projections of traffic 2011-2016 with a 29% CAGR (much less than some other views).  The traffic is overwhelmingly data and video (VoIP and gaming are negligible).

 

He presented items on software defined network and network function virtualization, which both involve decoupling control and processing from embedded network elements and moving them to standard servers remotely. 

 

“Fog computing” – a Cisco term that extends cloud computing with the notion of keeping locality to users and handling dense user populations to support mobility (I think what we used to call mesh networks).

 

He gave some interesting views on socio-politico-economic issues related to networking “Complex systems come at a cost, it’s hard to turn back”.  Our tendency is to continue to optimize and interconnect, but as we do that systems become fragile and decisions aren’t easy to undo. 

 

The concept of edge-ICT is of massive numbers of standard servers at the edge of the network (up to the customer premises) which are available to perform virtualized network functions and services. The obvious advantage is huge scalability.  This is a shift in strategy for operators – a shift from cost reduction to looking for new opportunities.  (Comment – I’m not sure it’s new, this is the manifesto of the organizations that participate in ICIN.  Network intelligence was never about cost reduction but about new service enablement.  The new thing may be decoupling completely from location)

 

(Comment – I don’t really get this.  It sounds too much like IN, application servers, and other visions where by installing the capacity to process things network operators hoped to attract new revenues, and I’m not sure why it will work this time.)

 

Question:  Why highlight Agriculture as a potential application?  Answer – it solves business problems for agriculture.  (Comment – I think what was really happening was that agriculture is a sector for which he had adequate data to make forecasts)

 

Question:  Max Michelle, Orange.  You claimed this would reduce backhaul costs, but with distribution you may wind up needing more bandwidth, not less.  Answer:  The trend towards smaller and smaller cells will mean dramatic increases in backhaul.  Deploying servers in those edges reduces needs.  (Comment – Yes, but now you’ve put servers into environments that today are not environmentally controlled and designed to support them, and you create a big problem in maintenance and upgrades for all that distributed hardware.  Like I said above, I don’t get this.)

 

LTE Policy Control Features (Ruth Brown, BT Research and Innovation)

 

The presenter was giving her first paper, with a co-author from BT.  The real focus was on what BT or another operator can do in policy control to differentiate themselves from OTT players.  She gave a projection of a 13X increase in global mobile data by 2017 (Comment – that’s basically doubling every year, like one of yesterday’s tutorials assumed).  Data rates increase dramatically because of LTE deployments (76 countries and 200 networks by 2017). 

 

The evolution from 3G to 4G eliminated the radio controller layer with proprietary interfaces between base stations and the core network.  4G is generic and standard (All IP).

 

The architecture, plus the dramatic increase in variety and volume of traffic flowing through the network, creates an increased need for policy control to ensure that each type of traffic gets the performance it needs.

 

She gave a bunch of examples of policy enabled services aimed at individuals, home/family, and business users.  (Most did not involve real-time processing of data flows, but operated more at the session control level – usage limitation, parental control, authorization for employee devices to enter a business private network, etc.)

 

She gave the evolved packet core architecture (basically IMS, with centralized subscriber data, policy control, and policy enforcement points.)  She overviewed the things that policies can control (QoS, queuing priority, flow filters, matching on IP address, protocol, etc.). 

 

She gave an example of limiting user bandwidth based on monthly consumption relative to a limit (i.e. throttle the user down as the monthly limit is approached, then allow only a trickle for “fair use” until next month).  (Comment – this is an example I’ve seen before.  Presumably LTE makes this easier than the ad hoc approaches that were used in other examples.)
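(Comment – the rule she described amounts to something like the following sketch; the tiers and rates are invented numbers, and a real implementation lives in the policy server, not in application code:)

    function allowedDownlinkKbps(usedGB: number, monthlyQuotaGB: number): number {
      const fraction = usedGB / monthlyQuotaGB;
      if (fraction < 0.8) return 20000; // full speed until 80% of the quota
      if (fraction < 1.0) return 5000;  // throttled as the limit is approached
      return 128;                       // "fair use" trickle until the next month
    }

    console.log(allowedDownlinkKbps(7.5, 10)); // 20000
    console.log(allowedDownlinkKbps(9.5, 10)); // 5000
    console.log(allowedDownlinkKbps(12, 10));  // 128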

 

She gave some future examples:  HD Voice, the ability to up-sell “HD” video for a special event, and the ability to provide consistent policy across multiple access networks (WiFi and LTE).  (Comment – I don’t understand the last one.  WiFi to me means access to any access point with no real control over what bandwidth or policies it supports.   Perhaps this is limited to WiFi access points provided by the network operator or a cooperating partner.) 

Location based triggers for policy (Not sure exactly what this was targeted at, but perhaps it’s something like “you get more bandwidth to interact with our store when you are nearby”).

 

Question (David Ludlam):  What’s the regulatory environment for this?  Will incumbent operators be required to offer these capabilities to others?  Answer – not sure yet, maybe.

 

Question (one of the WebRTC speakers):  How good is the forecast of a data tsunami?  He wasn’t sure it’s coming true, because of pricing policy changes, and wasn’t sure how much of that traffic winds up on WiFi outside the operator’s network.  She answered that it’s more based on the internet of things.  Follow-up: “Is there any point in applying policies to the 20% of the user’s smartphone usage that’s on LTE vs WiFi?”  Answer:  She really talked about increasing adoption of LTE and maybe policy enabled WiFi.  (Comment – I wonder what the real ratio of WiFi to LTE usage will be and whether policy managed WiFi will become the norm.  While that seems unlikely, given the number of WiFi networks, increasingly you see WiFi endpoints provided not by end users but instead leased from broadband providers, which may increase the likelihood that policy managed service becomes a reality)

 

Question (Roberto Minerva):  In Italy you can buy 10Gig of data for 15 euros (Comment – a nice price)  Policy is nice, but we need to look at the benefits with respect to the cost of the policy infrastructure.  Did you make any calculation like this?  (Not sure there was an answer)

 

Session 2:  WebRTC, Andres Lundqvist (Oracle)

He started on the topic of transitions, showing a major church in Istanbul which was built as a Christian church, became an eastern orthodox church, then a mosque, and ultimately a museum.  Transitions are a natural part of evolution, and WebRTC is positioned as a transition in voice services.

 

He gave a history, looking at ICIN 2005, which had a major theme on Peer-to-Peer services.  Skype had been sold to eBay.  Then Microsoft paid $8.5 Billion for them.  Nokia was bought for $7.2 Billion.  He sees this as inconsistent.  Voice usage is increasing 15% even if revenue is down.  Most of the growth in minutes though is Skype.  (His chart showed usage growth rates of 10-25% in most years of the past 15.)  (Comment – I’m not sure this is consistent with other figures that show usage decreases based on saturation and a shift from voice to other communication forms by younger people)

 

What’s WebRTC?  Peer-to-peer, open, no plug-ins or downloads, multi-platform and multi-device.  (Comment – I believe it CAN be peer to peer, but like Sip it doesn’t have to be!)

 

Catalysing the Success of WebRTC for the Provision of Advanced Multimedia Real-Time Communication (Luis Lopez Fernandez, Universidad Rey Juan Carlos)

This work is part of the FI-ware project, a European Commission research project on Future Internet.  He showed a long tail curve for video solutions.  Facetime, Skype etc. fill the mass market needs, Webex and others fill specific purpose video conferencing, but there’s a potential for a lot more certainly not served today.  WebRTC has the potential to serve that market. 

 

WebRTC is definitions and APIs but it is best described as a framework for a solution.

WebRTC is good point to point, but lacks group communications, interoperability, and value added multimedia (Comment – I think what he’s saying is that it’s immature and hasn’t focused on this).  The risk is fragmentation.  Without a standard signaling plane there’s no guarantee of interoperability.  Microsoft is pushing alternatives, and there are various technologies used (e.g. Flash) which enjoy limited support from browsers.

(Comment – the history of internet standards is not encouraging, in that beyond the early pre commercial internet standards like IP, SMTP, etc. standards have generally followed whoever or whatever won in the marketplace rather than being something anyone did up front.)

 

His strategy for defeating the fragmentation was open standards and open source (simplifying the development process).  (Comment – a reasonable approach.  I believe the reason that SIP “won” over H.323 for voice is that its developers gave it away.  Likewise HTTP/HTML won  in the early days when browsers were open source and/or free.)

 

Their implementation is based on Enterprise Java (JBoss) and standard Java APIs for communication.  They support the most popular codecs and formats (they have a media server that does this).  He gave some examples of using their platform, with code for the media server and client halves.  (Comment – it’s difficult for me to interpret the “ease of use of the API” that he and others are clearly trying to demonstrate.  My experience was always that the session/media control aspects of building a service are pretty straightforward.  The complexity is in exception handling and operations, which aren’t in the simple examples, and which “demo” applications don’t really address.)

Office in the Cloud: Web-based Cloud Platform for Telco Services (Masafumi Suzuki, NTT)

He started with a review of HTML 5 and its impact on Telcos:

  • It can restructure OTT driven application/content markets (with the potential to be more open I think)
  • It provides new opportunities for telcos based on scale (one source, support for many devices and applications)

 

Office On Demand is aimed at small and medium businesses; it includes hosted PBX, teleconferencing, call center, and other telephony services, but also business and office applications, internet mail, and web.  (Comment – I heard this one many, many times before, going back to AT&T’s ill-fated ACS offering in the 1970s.  The trouble is that cost is critical for those users, and for most of that time at least, the cost overheads of a big company like a carrier made it impossible to offer the service at a price point that was competitive with what small companies could do for the “applications” piece.  Secondarily, they don’t have the business sector expertise to get the applications right.  I don’t see WebRTC or HTML 5 changing this, since the problem isn’t technology, it is structure and expertise.)

 

He tried to show a service video, which failed to run continuously.  He did show some demo slides.  (Again, I don’t think the problem has ever been technical feasibility; it’s been depth of coverage of the business and economics that drive businesses away from these kinds of solutions.)  He also went through some scenarios. 

 

One thing he showed was delay time with their prototype.  It was much more subject to variation than a stand-alone SIP phone or SIP softphone implementation.  (Comment – not surprising, given that they routed all the media through a media server.  What was interesting was that even the standard SIP call had 238ms of delay.  That's a huge amount.  In my early work on packet voice, we were pushed by the people who evaluated call quality to keep delays very low, well under 100ms, in part to avoid the need for echo cancelling, but also because user expectations based on circuit telephone connections were for the delay not to be noticeable.  Maybe mobile phones and VoIP have retrained users to be more tolerant?)  They "solved" the variable delay problem by limiting load and adding another 40ms of buffering into the call.
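
(Comment – the extra 40ms is the classic playout (jitter) buffer trade: variable delay is converted into a fixed but larger delay.  The sketch below is my own illustration, assuming 20ms audio frames; the numbers are illustrative, not from the paper.)

// Rough sketch of a fixed playout buffer: frames that arrive within the buffer
// window play smoothly; the cost is BUFFER_MS of extra delay on every frame.
const FRAME_MS = 20;   // assumed audio frame duration
const BUFFER_MS = 40;  // extra buffering, as in the presented "solution"

interface Frame { seq: number; arrivalMs: number; }

function playoutTimes(frames: Frame[]): Map<number, number | "late"> {
  const first = frames[0];
  const schedule = new Map<number, number | "late">();
  for (const f of frames) {
    // Each frame is due BUFFER_MS after the nominal time of the first frame.
    const due = first.arrivalMs + BUFFER_MS + (f.seq - first.seq) * FRAME_MS;
    schedule.set(f.seq, f.arrivalMs <= due ? due : "late");  // late frames are dropped
  }
  return schedule;
}

// Example: frame 2 arrives 35ms after its nominal slot but still inside the
// 40ms buffer window, so playback stays continuous.
console.log(playoutTimes([
  { seq: 0, arrivalMs: 0 },
  { seq: 1, arrivalMs: 22 },
  { seq: 2, arrivalMs: 75 },
]));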

 

WebRTC, The day after – what’s next for conversational services (Emmanuel Bertin, Telecom Sud Paris)

The paper had several co-authors and represents a university/industry collaboration.  He began with his own review of WebRTC.  Designed for any device that embeds a browser (PC, Smartphones, but maybe TV, etc.)  It has standard APIs inside the browser (Curiously enough his slide showed Internet Explorer – okay, his next slide showed what’s implemented, only Chrome and Firefox, and only the pieces dealing with connection setup, not QoS, media manipulation, etc.).

 

Again, he highlighted the lack of standard signaling, and the standard model will probably include some kind of conversion in a network server that both endpoints communicate with.  He indicated that the downside here may be a non-interconnected world – Facebook users in one universe, Skype in another, Google in another, none of which interwork. 

 

He went through a set of scenarios on the evolution of services towards "Webification".  In the 1990s, everything was client/server (Web, IM, VoIP, etc.).  By 2005 you had a "Webapp" architecture that used HTTP for a browser to access a web server, which could also gateway to IM, email, etc.  In 2012 WebRTC brings voice into this same architecture.  (Comment – many users still have strong preferences for which architecture serves their communication needs.  Some are comfortable with Web email, but many find the interfaces clunky, don't like the lack of control over changes to their tools, and don't find it acceptable to need to be connected to access it.  Some are comfortable with a client/server architecture where the storage is on the server (like IMAP), while others really want their data on their device and don't care about access from multiple endpoints.  I don't know whether this is a real difference in people's needs based on work habits, or a "been there and got burned" reaction to poor implementations.)

 

He went through issues for services – standardization of features (outdated), and many entities in the signaling path (he was talking about the IMS architecture here), and finally tight coupling of the solution (again I think his model is IMS).

 

Discrepancy between architecture and protocol choices was another issue.

 

Ongoing work:  3GPP is standardizing a WebRTC/IMS gateway (What we were told yesterday about how many applications don’t care about interconnection with the PSTN may limit the value of this)

He also cited work on migration towards a full web architecture (It wasn’t clear to me what characterized this other than perhaps fully integrating connection to IMS).

 

(Meta-comment – the presentation used a slide-changing effect that caused the current slide to move off the top of the page and be replaced by another from below.  Personally I found this beyond annoying (i.e. it made me a bit dizzy to watch because my eyes wanted to follow the current slide off the page).  I suspect that people who spend more time flipping pages on a smartphone are more used to this, but I do wonder whether anyone ever evaluates how "viewer friendly" their visual effects are.  In the early days of PowerPoint presentations there were people who seemed to delight in trying all the transition effects that came with the package – and the audience usually tuned out the content in presentations like that.)

 

Question:  One of yesterday's speakers asked about initiating phone calls from Facebook and whether they found a demand for that.  (It was just an illustration.)  There was some further discussion on this, and a viewpoint expressed by someone that we should use WebRTC to connect the diverse voice communication worlds of the internet into E.164 global naming.  (Comment – of course the person who proposed this wasn't at the tutorial and didn't hear the comments on the lack of interest among various users of voice in interworking.)

Some WebRTC Opportunities for RCS (Romain Carbou, Orange Labs)

 

He started with a comparison of RCS and WebRTC.  RCS is a standardized service, not technology, and uses a standardized implementation, vs WebRTC which is more a framework for building services in which neither protocols nor services are standardized.  (Comment – I think they solve different problems.  RCS basically gives you portability of services across devices and carriers through standardization.  WebRTC is a toolkit for doing things that aren’t standard.)

 

He went through the standardization roadmap for RCS. 

 

Their work was on interfacing the two, allowing a user from the web to connect to and use RCS services through a gateway, essentially extending RCS into web originated calls.  He gave some scenarios involving call centers.  Mobile users who have RCS can use it with a call center today.  Their gateway would allow the call center to be built using WebRTC and control the RCS calls.  (Comment – this seems like a very natural thing to do and will have value.  I believe the kind of issues raised in yesterday’s WebRTC session related more to the need for WebRTC to support a lot of non-traditional scenarios in which interworking is definitely not a focus because RCS isn’t relevant to the service being deployed.)

 

What are the short-term challenges?  Standardizing and deploying a network address book for uniform, device-independent communication and social networking. 

 

Beyond that WebRTC is renewing the experience for web based social networking while RCS standardizes multi-media call experiences.  His view is that WebRTC will enable a new portable model of social networking, which is both an opportunity and threat to operators.

 

Question:  There is too much variation in RCS profiles by region and user type to allow efficient development of standard silicon.  They are pushing towards one profile for the US and one for Europe.  (Comment – Asia?)  How does WebRTC help or hurt?  Answer:  WebRTC won't help or hurt this.

 

Session 3: Network Transformation Using the Cloud (Dan Fahrman, Ericsson)

 

Cloud-Enabled NGN Architecture with Discovery of End-to-End QoS Resources (Silvana Greco Polito, Universita degli Studi di Enna Kore, Italy)

 

She started with the problem of finding a way for users connected to one cloud to access services in another cloud, and figuring the best path through intervening networks to achieve the QoS required for the services.  The framework they are working on integrates with IMS and adds a Path Computation server which assists in figuring best paths through networks.  They also have an element for discovering computing resources across domains.  A typical request would first ask for computing resources – locate them in some other domain, then ask for the communication resources to connect to them, and then compose the service.

 

Each cloud has local databases with information about local computing sources.  The control subsystem (which finds computing resources) can access the local DB and send requests to those in other networks to look for what is needed (in parallel).  The solution uses Diameter and PCE to communicate between the servers.
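
(Comment – the parallel discovery she described is essentially a fan-out query to peer domains.  The sketch below is my own illustration, with hypothetical queryLocalDb/queryDomain functions standing in for the actual Diameter/PCE exchanges.)

// Sketch of the discovery fan-out: check the local database first, then query
// the control subsystems of peer domains in parallel and keep the domains that
// can supply the required computing resources.
interface ResourceOffer { domain: string; cpus: number; }

async function discover(
  neededCpus: number,
  queryLocalDb: () => Promise<ResourceOffer | null>,
  peerDomains: string[],
  queryDomain: (domain: string) => Promise<ResourceOffer | null>,
): Promise<ResourceOffer[]> {
  const local = await queryLocalDb();
  if (local && local.cpus >= neededCpus) return [local];  // satisfied locally

  // Otherwise ask all peer domains in parallel, as described in the talk.
  const offers = await Promise.all(peerDomains.map(queryDomain));
  return offers.filter((o): o is ResourceOffer => o !== null && o.cpus >= neededCpus);
}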

 

They define new attribute/value pairs for discovery of paths.  They presented some performance results on scaling that showed how the number of requests per service grew with the number of domains.  She had graphs for NSFNET-like networks and Internet-like networks (the difference was scale and interconnectivity).  Curiously enough, in the larger Internet-like networks the number of messages per request decreased with an increasing number of networks, while in the smaller NSFNET configurations it grew.  (Comment – if she explained this I missed it.  I'd conjecture it has to do with how easy it is to find the resources needed, which cuts off the propagation of requests.)

 

Question: Are the paths reserved when computed?  (I couldn't hear the answer.)

Question:  This is for control-plane paths; could it apply to user-plane paths?  (Answer – yes.)

 

A Topology-Aware Adaptive Deployment Framework for Elastic Applications (Mathias Keller, University of Paderborn, Germany)

 

He posed a scenario with data centers and users, and different users might be served by different data centers (for reasons of cost and/or latency).  Moving these associations to adapt to load variations, cost, etc. is desirable and the topic of their work.

 

Today applications are diverse, multi-tier, and generally stateful (i.e. retain information across interactions with their users).  This is a problem for moving since you can’t shut down one place and start another without loss of data.  How can this be addressed?  With hardware virtualization you can do this with no runtime changes (presumably you move the execution space of the program across the network and start it where it left off).  A problem is isolation. (I presume he meant what happens if the running server gets isolated from the others so you can’t move its state.)

 

The situation today is there is no generic solution, so every application must be manually integrated and configured for this kind of environment.  This becomes a barrier to swift adoption too.  (Comment – I’m not sure I’d go as far as stating that there’s no generic solution.  Migration of services in distributed architectures has been a topic of research for decades and many solutions have been proposed.  The real problem may be that those existing solutions are uneconomic or don’t meet performance requirements.)

 

He discussed a toolkit for building adaptable applications that can handle 3 kinds of reconfiguration:  Application level, Service level, or network level.  He showed a solution in their prototype based on a plug-in that handles the reconfiguration. (Many more details in the paper.)

 

Question (Anders Lundqvist) – have you looked at any relationship between the number of sites used and efficiency?  Moving from one to two clearly helps latency and reliability, but at some point adding the next site increases communication and complexity to the point where it overcomes this.  (Comment – this is a classic distributed processing question)  I think the answer was basically that this was out of their scope.

 

Towards Cross-Industry Information Infrastructure Provisioning – A Resource Based Perspective (Hannes Kuebel, TU Berlin)

 

(Comment – at the beginning of this talk we again saw an interesting impact of the conflicting requirements of international usability and local customization.  The presentation was in PDF, which required some work with the conference center's personal computer to activate.  Unfortunately for him, the version of the PDF reader on the machine presented its user interface in Italian, and the presenter couldn't read enough Italian to figure out how to put the display into "full screen" mode.  Language-independent icons help, but aren't sufficient for complex tasks like this.)

 

Can cooperation between private telecom companies and other companies contribute to Next Generation Access Deployment?  (The question being explored was really about co-operation among utilities and specialized communication companies to deploy access.) 

 

He talked about different kinds of carriers, National carriers (i.e. the traditional Telcos)  Metropolitan carriers (generally full service carriers that compete only in a region) and Business carriers (Serve only the needs of a particular business).  They conducted research through expert interviews using “Grounded Theory” and “Case Study Theory” to determine whether these carriers can cooperate to build infrastructure that meets all their needs. 

 

They talked to 3 national carriers, with networks in the process of modernization and available right-of-way and ductwork.  They also had clear organizational assets to operate networks for themselves and for others.

 

They talked to the 3 most successful city carriers in Germany (over 70 entered the market, but the most successful are all utility companies offering communication services).  These carriers have both physical and organizational assets to deploy and manage networks (including right-of-way, and the ability to deploy lines), but there are some limits on their ability to manage technology.

 

They talked to 4 business carriers.  Typically these companies operate as wholesalers to other telecom companies and retailers to businesses outside telecom.  These companies typically outsource the physical network to utilities. 

 

He showed some kind of summary of the analysis showing which things contributed to success.  In general there are more opportunities for cooperation than forces inhibiting it.  National Carriers are the strongest. 

 

The rest of this presentation was too fast to capture the details here.  There are more details in the paper.

 

Designing Carrier's Online Storage "Family Cloud" for Enhancing Telecom Home Services (Atsushi Fukayama, NTT)

He started with graphs showing subscribers for DSL and fiber-to-the-home services for NTT.  Basically, over the past 5 years DSL has steadily dropped and fiber has steadily grown.  The growth of fiber, though, is saturating, and retaining customers is critical.  Their objective was thus to retain home customers by providing family-oriented broadband services.  There was a special focus on the elderly (the young often don't have fixed lines at all).  (Comment – both this and the other NTT paper today had examples of services aimed at the elderly, partly, I believe, because Japan is ahead of the EU and the US in population aging and is already experiencing the challenges of an aging population encountering limitations in independent living.)

 

The problem for them is that it's difficult for subscribers to feel they are getting value from the network.  (It's complicated – lots of applications and devices.)  Therefore their strategy has been to integrate services into home life.  (Comment – this is interesting.  In the US, the battle for loyalty is clearly around entertainment.  Companies tout their capabilities for HD and on-demand video, DVR, and access to popular channels more than other aspects of broadband service.  Even basic internet is sold based on its capability to stream video.  I wonder if video isn't bundled with broadband access services in Japan?)

 

He gave some service examples: 

  • Photo frame – the example seemed to be about easy retrieval of photos onto a home screen using an interface on a smartphone/tablet.

 

  • Abstraction Video Phone – this service hides the person's physical details according to methods selected by the end user, presumably as a privacy mechanism.  The examples he showed displayed the people in outline or avatar form, but the images would move like the real user.

 

He talked about adding value based on new technologies:  Image recognition, audio recognition, lifelogs and learning from lifelogs, etc.

 

A third example was Smart Cloud Player.  I think this one would select appropriate content based on what it knows about you.  It also had the ability to work backwards: select past experiences that relate to new input – the video showed putting an object (a DVD) in front of a camera, and the system offered up some videos that related to it. 

 

He showed a functional architecture for these services that included various databases (users, policy, media, etc.) and a  rule engine that worked to drive what was being done. 

 

He concluded by summarizing the examples and how they related to the goal.

 

(Comment – this isn't the first "Family Cloud" paper I've seen from NTT; it's clearly an ongoing project.  Like so many I've seen from NTT, the services are innovative and fascinating, but I wonder how they translate into different cultural and business environments.)

 

Question (David Ludlam):  How long has it been in service and what effect has it had?  Answer – they have been investigating it for a year, but they are still trying to sell it to their own business team, it’s not deployed.

 

Question:  What privacy issues does this raise, and how are they addressed when you have user data in the cloud?  (His answer was more about making it easy for the user to get the data into the cloud – automating picture upload from a WiFi-enabled camera.)  (Comment – one reason why I don't have a WiFi-enabled camera is a concern that it may be hackable to capture my pictures over a WiFi network that happens to be active when I turn on my camera.)

 

Session 4:  Improving Customer Experience (Max Michel, Orange)

 

Real-Time Privacy Preserving Co-Browsing with Element Masking (Bin Cheng, NEC Europe)

 

Co-browsing is used for many purposes: user assistance, social networking, shopping with friends, etc.  The usual architecture sends all the input (keys, mouse clicks, etc.) to a common network server which combines them and communicates with the web site, then distributes the pages coming back to the users.  The trouble is that you can't protect user privacy.  The architecture doesn't work for HTTPS-based services either, since you have to share your security credentials.  You need to fully trust the proxy server which does the combination.

 

In the proposed scheme the person who creates the browsing session establishes rules for what information is shared with other participants and what isn’t.  Establishing the rules can either be done by the application server and shared with users, or by the session creator (user) using a graphical tool.  “Hidden” elements are replaced by “fake” data that takes the same space in the display to preserve the same layout.  HTTPS credentials go directly to the application server from the end user (Comment – I wasn’t sure exactly how this worked from either the talk or the paper)
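
(Comment – as I understood the masking idea, it amounts to something like the sketch below: elements matching owner-defined rules are replaced with same-length placeholder text before the page state is shared, so the layout is preserved but the content is hidden.  The selectors here are hypothetical examples of mine, not from the paper.)

// Sketch: mask sensitive elements with fake text of the same length before the
// DOM snapshot is distributed to the other co-browsing participants.
const maskRules = ["#account-number", ".credit-card", "input[name='ssn']"];

function maskForSharing(doc: Document): Document {
  const copy = doc.cloneNode(true) as Document;  // never touch the owner's live page
  for (const selector of maskRules) {
    copy.querySelectorAll(selector).forEach((el) => {
      const original = el.textContent ?? "";
      el.textContent = "\u2022".repeat(original.length);  // same length, fake content
      if (el instanceof HTMLInputElement) el.value = "";
    });
  }
  return copy;
}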

 

A Web Synchronization Method for Supporting Real and Non-Real-Time Web Communication (Kazuyuki Tasaki, KDDI Labs Japan)

 

This work covered both co-browsing with multi-media content (talking or video conference with an agent while visiting a web site) and non-real time examples. 

 

The architecture uses a conference server to collect information from the users and synchronize audio/video that’s sent to the users with the events sent to the web server and distributed to participants.  The concern is that with variable network delay the web output may not be synchronized with the audio/video.  Sometimes the problem is that one participant is not available and doesn’t get the information in real time but still wants to see it properly synchronized. 

 

He proposed techniques to address the issues using a synchronization server in addition to conferencing in the network to coordinate the information going to the web browsers of participants to account for differences in network time and rendering time among participants.  In a testbed, they ran endpoints with different bandwidth to create synchronization problems.  Without coordination, output got out of synchronization increasingly as the difference in transmission rates or the size of the web content grew.  Their synchronization techniques kept the differences in timing below 500ms.
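
(Comment – the core of the technique, as I understood it, is to hold the web event for each participant until the slowest participant can render it.  A rough sketch with invented names and numbers:)

// Sketch: compute how long to hold a web event for each participant so that
// everyone renders it at roughly the same wall-clock time.
interface Participant { id: string; networkDelayMs: number; renderDelayMs: number; }

function dispatchOffsets(participants: Participant[]): Map<string, number> {
  // Total delay each participant needs before the content is visible.
  const total = (p: Participant) => p.networkDelayMs + p.renderDelayMs;
  const slowest = Math.max(...participants.map(total));
  // Faster participants get extra hold time so all converge on the slowest one.
  return new Map(participants.map((p) => [p.id, slowest - total(p)]));
}

// Example: A (30+20ms) is held 150ms so it lines up with B (150+50ms).
console.log(dispatchOffsets([
  { id: "A", networkDelayMs: 30, renderDelayMs: 20 },
  { id: "B", networkDelayMs: 150, renderDelayMs: 50 },
]));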

 

Question (Bruno Chatras):  At ICIN 2012 there was a paper from TNO on synchronization.  Is there any connection to your work?  Answer – He wasn’t familiar with it.

 

WoW: Wallet on Wheels (Rebecca Copeland, Core Viewpoint, UK)

 

Why should a car have a wallet?  -- Cashless payments when travelling, micropayments too small for credit cards, enhanced security (based on having the car and the driver’s credential) but mainly a better way to pay for automotive services.

 

Why not just use smartphones?  -- Cars are less vulnerable? (Comment – are phones lost and stolen more than cars?  I wouldn’t know).  Other factors were:  Separation of travel from personal usage, and automotive services that use the car’s credential.  (Comment – a colleague from Bell Labs pointed to some article looking at how often his EZ Pass electronic toll tag was scanned and discovered that a lot of things are now capable of reading it and potentially charging it. )

 

Why now? --  Mobile money is becoming mainstream, transport is going ticketless, some retailers are going cashless, fraud is a big problem, and the lack of payment is limiting the growth of car-related services.  (Comment – I found it fascinating that this paper came from people in London and Paris, not cities in which I picture a lot of people driving.  What they are clearly envisioning, though, is a lot of "pay per use" services invoked from your car.)

 

The technology is ready since most cars have processing power and communication connectivity.  One issue here is the relationship of cars and users.  You can have shared accounts for one car, a shared subscription but multiple accounts (car pools where each person might pay, but the car is shared), or a shared vehicle (like a business car pool or a rental). 

 

She showed an example of using the car key "fob" to pay for parking or to open doors and even pay for a hotel.  She also talked about using the car "key" in conjunction with a secure ID device that generates a pseudo-random key to be used as a second identification factor.  (Comment – multi-factor security like this works only if the two factors are really independent and not likely to be compromised at the same time.  I don't know if that is true here, since I suspect most people will keep the car keys, fob, and secure ID token on the same key ring.)

 

Cars in Europe will be required to interact with the mobile network to make an emergency call in the case of a crash.  (Comment – that’s news to me.  Not especially surprising, but I wonder if people have considered the privacy implications.  Mobile devices can be tracked.)  That means all cars will have an embedded SIM card and trusted environment.  They will have to address problems of activation and global naming so that this will work in any country.  This is a platform that you can use to build payment services on.

 

She talked about how the car can also enhance security by providing another authentication factor.  She went through some essential communication, including reporting accidents or security risks (someone breaking into the car) and the user calling the car to figure out where it is.  (Comment – I like that one, if it works.)

 

At this point Rebecca got several "over time" warnings and sped through the slides on how this would all work in a business context, summarizing the key aspect, which seemed to be using the car's mobile authentication credentials in conjunction with the user's credentials to improve security and create a system capable of supporting payment.

A User-Centric Context Aware Mobile Assistant (Noel Crespi, Institut Mines-Telecom, France)

 

The huge increase in mobile applications each focused on a single service is creating user confusion and difficulty in performing complex tasks.  (Comment – This is precisely what’s wrong with the “there’s an App for that” model.  They aren’t designed to be “mashed up” to create higher level services for the user.  Each is fully independent).

 

Their solution is an intelligent assistant, DoTT, which intercedes between the user and the apps to build more complex services (e.g. Send a message to Bob on his birthday).

 

The service used natural language and had a definition of the domain of application, which was establishing conditions under which pre-specified applications would be invoked.  (Comment – this looked fairly primitive.  I don’t know that it is capable of expressing some of the interactive services I proposed more than a decade ago as potential mashups – e.g. “Tell me when I need to leave for the airport” – taking into account my location, traffic conditions, my agenda which has my flight information, and the airline’s flight status data.  Again, the fact that the information is all encapsulated in independent “apps” never designed to be combined is more limiting than a web architecture.)
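
(Comment – as described, the assistant boils down to condition/action rules that invoke existing apps.  A minimal sketch of that shape, with hypothetical condition, context, and app names of my own:)

// Minimal condition/action rule shape, as I understood the DoTT model.
// The context fields and the "messaging" app are hypothetical placeholders.
interface Context { date: string; contacts: Record<string, { birthday: string }>; }

interface Rule {
  name: string;
  when: (ctx: Context) => boolean;
  then: (ctx: Context) => void;
}

// Placeholder for handing an action off to an installed application.
function invokeApp(app: string, args: object): void {
  console.log(`invoking ${app} with`, args);
}

const rules: Rule[] = [{
  name: "Birthday greeting",
  // Fire when today's month and day match Bob's recorded birthday.
  when: (ctx) => ctx.date.slice(5) === ctx.contacts["Bob"].birthday.slice(5),
  then: () => invokeApp("messaging", { to: "Bob", text: "Happy birthday!" }),
}];

function runRules(ctx: Context): void {
  for (const rule of rules) {
    if (rule.when(ctx)) rule.then(ctx);  // fire every rule whose condition holds
  }
}

runRules({ date: "2013-10-16", contacts: { Bob: { birthday: "1970-10-16" } } });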

 

Much of the machinery for doing this looked a lot like the concept of service brokers (a context management system that controls how and when service elements are invoked), or service orchestration. 

 

Question:  The focus is on users, but wouldn’t this also apply to developers? (i.e. isn’t this a general mechanism for mash-ups?)  Answer – most of the application developers have no coordination, this is a way of handling that. 

 

Question:  Is this local on the phone? (yes).  Would it make more sense to centralize this and learn from what other users have done?  Answer – the intent is to do this locally.  Questioner said there is an app called “If this then that” for the iPhone that does things like this. 

 

Question:  What’s the relationship of your platform to the other applications?  What does it have access to to make decisions and what does it control?  Answer – yes they have that access.  Question:  How do you convince the developer to give you access?  Answer – he uses an Android capability for allowing applications to share with each other.   (Comment – yes, but presumably this only works if the other applications have been built to share)

 

Question:  Is this rule based?  Answer: Yes.  How do you prevent an explosion of rules?  (I think the question was how to keep from performance collapse as the number of rules rises.)  I don't recall him giving a clear answer to this. 

 

This discussion continued during the break with many comments on security issues in both car communications and smart phones, and the essential conflict between opening interfaces that allow applications to cooperate, and security threats that allow an untrustworthy application to spy on others.  Memorable Quote:  It’s like a binary chemical weapon – two things that are harmless individually combine to form something dangerous.

Session 5, Business Models (Stefan Uellner, T-Systems)

Session Chair’s Introduction:  This is not a technical session.  It is about discussing the business models driving the adoption of the technology.

Roaming Unbundling – Challenges and Opportunities (Rogier Noldus, Ericsson Germany)

 

The unbundled roaming initiative came from a study of what roaming really is and does, and resulted in a new EC rule to allow subscribers to choose an alternative provider for communication services abroad.  The issue being addressed is to avoid having the choice of a home service provider lock the user into an uneconomical arrangement while roaming, and instead to allow competition to drive down prices for roaming, which today is so expensive it discourages use.

 

He questioned the talks which declared voice as in decline – it’s still the most important service.  Today, when you are abroad you may be served locally (Telecom Italia in Venice), but your service still comes from your home network (e.g. Orange in France).

 

The new notion is that you sign up for an alternative roaming provider before travelling, typically prepaid (but possibly postpaid), and they support you when you are not in your home carrier's service area.  (I think he also said that the individual services – voice, SMS, data, etc. – can be independently subscribed.)  All this is done without having to switch the SIM card in the phone.  (The way people do affordable roaming today is to swap out their home SIM for one from a local provider, which gets a much better rate, but also loses access to the user's information and services and requires more technical skill than simply signing up for another carrier.)

 

There is a lot of pressure on roaming costs already, and some are nearly at the level of local services.  The technical implementation of this basically exposes control of the services to the alternative roaming provider – like a MVNO.  Control is only for charging. 

 

A special case is local breakout – getting reasonably low-cost mobile data.  There is a 3GPP answer in which data is charged through your alternate carrier, and another in which you subscribe to the local network operator and are billed through them.  Part of this project is an effort to define a common APN in the EU (how you connect to the internet), "EU-internet".

 

Transparency is a big issue.  The solution has to apply to 2G, 3G, LTE, and circuit and packet services.  You have to make this easy for the user.  User shouldn’t be required to sign up locally in every country visited.

 

Question (Rebecca Copeland):  How does the user select which alternative provider's network their data goes over, given local variation?  She also pointed out that the decision might most appropriately be made by an enterprise providing the device (or the service?) based on their evaluation of costs and non-cost factors like security.

 

Constructing a Multi-Sided Business Model for Small Horizontal Internet of Things Service Platform (Frank Berkers, TNO)

 

Serving applications for “the internet of things” is difficult because the platform must be able to support a large number of applications, be broadly applicable, and scalable for huge networks of endpoints.

 

Their platform is called iCore.  The architecture has a level to virtualize all the objects in the network, then a layer which organizes them into processing streams, and a service level which exploits both learning and real-world knowledge. 

 

He gave some examples.  One was a traffic information service that interfaces with bus companies, traffic cameras, and navigation systems.  Another was a service that aggregates information from this plus users calling in to report problems.  You might then add a courier which wants to use all this to plan routes and estimate delivery times.

 

The business model had a service side and a data side.  “Same Side” interactions are between companies that participate on one side or another, while cross network effects involve interactions between services and data/sensors.

 

Some of the benefits of the platform included:

 

  • Adding new sources of data to enhance services transparent to the user
  • Adding new services easily.
  • Economies of scale and scope because of shared resources and data, as well as real-world knowledge.
  • High number of applications and data sources, including dealing with heterogeneity.
  • Automatically deal with dynamics in service execution and deployment (recover from server failures, etc.)
  • Service selection done in natural language.

 

He went through some examples of how the platform helps extending services – re-applying processing developed for one domain in another, merging domains to apply the same processing across all of them, and outsourcing aspects of services.   

 

He concluded by indicating that this is a difficult problem and their platform is only a small step.

 

Question:  Stefan Uellner supported the claim that this is difficult and asked who the partners developing this were.  Telecom Italia and a major bank were among them.

 

Question:  Knowledge processing has been a vision for decades, is there any new breakthrough that supports this?  Answer – nothing new in knowledge processing, but it hasn’t been applied to this kind of problem yet.  (Comment – I’m not so sure, there have been so many applications of what we call knowledge based systems it’s hard to find a problem that hasn’t at least been attempted.)

 

Question:  Do you have a strategy for getting an ecosystem of service providers going, especially early when you don’t have critical mass?  They have defined 5 application domains, they know what players are in each. (Presumably these are who to recruit.)

 

Recommendation as a Service (RaaS): New Challenges for, and Evaluation Metrics of, Recommender Systems (Gerald Eichler, TI Labs)

 

Recommendation systems are important as a business because they are the way that business can attract users.  They are in the same space as advertising and “teasers” (e.g. film trailers).  (I believe recommendation is a better source of customers because the user seeking a recommendation has already decided he or she needs to do or buy something).

 

Recommendation is essentially a mapping, and in a space where there are items (things to be recommended) and users, you can have user-to-user recommendation, user-to-item recommendation, etc.

 

Device-centric recommendation is well established (e.g. what songs should I put on my iPod?  What books might I be interested in getting on my e-reader?). 

 

He went through the way this is typically done by establishing relationships among items in the domain and using a profile of the user’s interests and filtering to determine the recommended items.  He plotted this in a 3 dimensional model based on how the recommendation model was done, how the profile was done, and the representation (details in the paper).  The real goal here is to abstract this machinery to allow it to be sold as a service to businesses looking to develop a recommendation based service.
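
(Comment – to make the "relationships among items plus profile filtering" machinery concrete, here is a tiny item-similarity sketch with made-up data; a real system adds the profile and representation dimensions he plotted.)

// Tiny item-based sketch: score unseen items by their overlap (Jaccard
// similarity) with items the user already liked.  The data is made up.
const likedBy: Record<string, Set<string>> = {
  "jazz-album-A": new Set(["u1", "u2", "u3"]),
  "jazz-album-B": new Set(["u2", "u3", "u4"]),
  "thriller-novel": new Set(["u5"]),
};

function jaccard(a: Set<string>, b: Set<string>): number {
  const inter = [...a].filter((x) => b.has(x)).length;
  return inter / (a.size + b.size - inter);
}

function recommend(userLikes: string[]): [string, number][] {
  return Object.keys(likedBy)
    .filter((item) => !userLikes.includes(item))
    .map((item): [string, number] => [
      item,
      Math.max(...userLikes.map((liked) => jaccard(likedBy[item], likedBy[liked]))),
    ])
    .sort((a, b) => b[1] - a[1]);  // highest-scoring items first
}

console.log(recommend(["jazz-album-A"]));  // jazz-album-B scores 0.5, thriller-novel 0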

 

He went through several more analyses of different alternatives to creating recommendation services. 

 

Question (Rebecca Copeland): Current recommendation services for things like hotels are often quite bad, partly because of commercial interests but often because the input from users is noisy.  How do you ensure that the result is useful?  Answer – he talked about communities and the value of communities.  Rebecca re-iterated the problem.  (Comment – the real problem is that recommendations based on user input work best with products that fit in narrow categories.  That's why they work with things like books, movies, and music.  If both I and another user liked a particular mystery book or jazz recording, odds are that I will like another mystery or jazz recording which the other user liked.  The trouble with hotels and restaurants is that they are far more complex and don't fit into simple categories.  Someone else might like a hotel because of its location and the fitness room, while I liked it because of the bedding and the free breakfast.  Having the other user like another hotel is not going to predict whether I will like it if what's important to me about it is different from what's important to the other user.  That's one reason why the ratings are "noisy".)

 

Question:  Is the problem that recommendations are today provided within some service as a way of bolstering the usage of the site?  To get unbiased true recommendations it has to become a service, not a feature of a site, but who pays? (I don’t think this was answered.)

 

Question (Max Michel):  Today the web is full of "trolls" who post to influence recommendations up or down because it's anonymous.  Could you improve recommendations by providing filtering and avoiding this?  His suggestion was yes, but it wasn't clear how.  (Comment – this is a basic issue with all recommendation services, and I don't think it can ever be completely addressed.  You can force people to identify themselves, but as long as it is easy and free to register new identities, someone with motivation to "stuff the ballot box" will do it.)

 

Session 6:  New Architectures for Service Delivery (Bernard Villian, Cassidian, France)

 

Bernard had many years with Alcatel, and may be the only person left to have attended every ICIN.  The session is a traditional ICIN topic (Service delivery).

Moving to Content-Centric Networks (Mike Nilsson, BT)

His presentation was a lot about watching video (TV) and how to reduce the impact of TV on the network.  In the beginning, everything was live, over time, more and more was recorded and played later, to the point that now relatively little is live, and people have a variety of ways to watch:

  • 50% watch behind some kind of recording device
  • A significant fraction use alternate ways to view through streaming.

 

Still, most people watch live TV (20% of people at 9PM). He showed curves for viewing versus time of day in 2002 and 2012, and if anything there’s more viewing in 2012.  Why?

  • Sports and interactive programming
  • Social media
  • Inertia

 

Of course we don’t know when they view, only when it is consumed locally, which may be by a recorder.  Again, the peak is clearly in “prime time”. 

 

He showed an interesting chart of a particular day which had a huge peak of 10PM-11PM, only about half of it viewed in real time.

 

He showed a curve of popularity versus rank order for content, and of course on a log-log plot it's a straight line – Zipf's Law.  Over 50% of BT's traffic on their IP network is video, and it also peaks at 9PM.  The trouble is that they have to design for that peak.  So, the problem is: can they shift some of that traffic earlier in the day through caching?
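
(Comment – for reference, Zipf's law says the popularity of the item at rank r falls off roughly as a power law, which is why it appears as a straight line on log-log axes:)

  p(r) \propto \frac{1}{r^{s}}, \qquad \log p(r) = C - s \log r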

 

(Comment – this is interesting, in the US many subscribers have DVRs provided by the cable company or by a telecom carrier which in fact could be controlled to do this for content that could be delivered “early”, and saved for release at the right time.)

 

A big issue here is of course whether the desired content is available before live broadcast which of course for most things is yes.  Now that content is encrypted and recoverable only on trusted platforms, you could deliver the bits early, then hold them for synchronization with the live broadcast.  (Comment – yes, but I suspect for some things there would be a big opportunity for someone to break the encryption and reveal the program early.  Secondarily, even if only one program a year in the “peak” is live sports (not recordable), you still need the capacity)

 

Another plot looked at what content was live comparing recorded versus viewed in real time.  Almost everything recorded was not live to begin with.  (I’m actually surprised, I’d expect a lot of sporting events to be recorded for later viewing)

 

He introduced a new protocol (actually from someone at PARC, with participation by ALU, BT, Orange, Huawei, and others) for named data.  In this protocol packets are addressed according to the content they contain, and received by people interested in the content, and can be cached at any point in the network.  The content store is essentially a big buffer of an IP flow configured to cache rather than queue. 

 

When someone registers interest in a flow, you look to see if it's in the content store and if so give them the stored content.  Otherwise you check whether others have already registered interest: if so, you add this interest to the record; if not, you create a new record of interest and ask another server (according to a strategy) for the content.  When you get content, you can use it to fulfill requests, or just store it for future interest.
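
(Comment – the forwarding logic he described maps onto the content store / pending interest split in the CCN literature; the sketch below is my own rendering of that request handling, not code from the talk.)

// Rough sketch of CCN-style request handling: satisfy from cache if possible,
// otherwise aggregate the interest, otherwise forward upstream.
const contentStore = new Map<string, Uint8Array>();       // cached named data
const pendingInterests = new Map<string, Set<string>>();  // content name -> interested faces

function onInterest(name: string, fromFace: string, forward: (name: string) => void): void {
  const cached = contentStore.get(name);
  if (cached) {
    deliver(fromFace, name, cached);                // served from the content store
    return;
  }
  const waiting = pendingInterests.get(name);
  if (waiting) {
    waiting.add(fromFace);                          // someone already asked; just join the record
    return;
  }
  pendingInterests.set(name, new Set([fromFace]));  // first request for this name
  forward(name);                                    // ask upstream, per the forwarding strategy
}

function onData(name: string, data: Uint8Array): void {
  contentStore.set(name, data);                     // cache for future interests
  for (const face of pendingInterests.get(name) ?? []) {
    deliver(face, name, data);                      // satisfy everyone who was waiting
  }
  pendingInterests.delete(name);
}

function deliver(face: string, name: string, data: Uint8Array): void {
  console.log(`send ${name} (${data.length} bytes) to ${face}`);
}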

 

He reproduced some information from a paper on this that demonstrates that this CCN architecture performs better any time more than one input is interested in the same content.  (Comment – this is like IP multicasting, with buffering.  It seemed to be a very reasonable approach.)  He compared this to multicast, saying that the architecture is as efficient as multicast, but is more efficient in recovering lost information, and much more efficient when the receivers aren't synchronized or want to back up or restart.

 

Question:  Did you compare this with Apple’s streaming?  No

 

Question (me):  Do you have data on the worst case day for traffic that is live and cannot be cached?  (He didn’t directly answer that, but said that they have so far been able to keep up with demand for new information.)

 

An Approach to Expose M2M Services over OMA Next Generation Service Interface (Asma Elamangoush, TU Berlin)

 

The paper was a collaboration of authors from TU Berlin and Fraunhofer Institute FOKUS.  Context-aware systems use context to provide relevant information or services, where relevance depends on the user's task.  Context might involve a lot of things (e.g. what the user is doing, where they are, who they are interacting with, etc.).  The work here applied this concept to applications for the Internet of Things.

 

She gave an overview of the internet of things, including its potential economic impact, and standardization efforts related to it.

The OpenMTC platform they are working on is a context management framework.  It builds on the OMA Next Generation Service Interface effort and supports those standard interfaces, but in addition incorporates APIs for context awareness.  (Comment – this seems a natural extension of the ongoing Fraunhofer FOKUS work on platforms for IMS, OMA, and other telecommunications networking standards.)

 

The work integrates with the European Future Internet Project (FI-Ware).

 

Hybrid Composition of Telecom and Internet Services: The Telecom Operator Perspective (M. Strecca, University of Genoa and Padua)

 

He is a previous ICIN speaker and works with TI Labs on this project as well as others at his laboratory. 

 

In the telecom operator view, service composition is generally focused on assembling building blocks representing capabilities.  In a traditional telco world this is typically done in an SDP, which is typically a proprietary solution.  They have previously worked on this and published specifications for an SDP reference architecture, and specialized these specifications for asynchronous events and long-lived transactions in a telco environment.

 

On the internet there are a lot of web APIs, and the building blocks are designed to be composed using a variety of technologies (RSS, REST, SOAP, etc.).  There are many tools to do this.

 

They looked at two dimensions for analyzing service composition – where the service runs, and where the components are.  There are two hybrid cases where services and components exist in different environments (service in the telco network, components outside, or vice versa).  He went through the architectures for the In/Out and Out/In cases (the hybrids).  In/Out (where the service runs inside the network and accesses external services like Facebook through APIs) requires more programmer expertise but provides more capabilities.  In/Out has lower latency and allows the telco to manage the service lifecycle.  In the Out/In case (where the service runs outside the network and accesses network capabilities through exposed APIs) an external service platform provides the management.  This is usually done on a common service platform known to many programmers.  The need to go through APIs for every interaction with the network increases latency.

 

They developed a prototype building on an existing SDP's interfaces to the outside to support the In/Out case.

 

Question (really a comment):  The architecture clearly supports services with a mix of building blocks inside and outside the network; isn't this the interesting case?

 

Question (Dean Bubley):  Most service innovation happens in the internet; how does this support them?  (Corrado said that their motivation was from the telco's perspective, so they deliberately focused on the In/Out case, since Parlay and other initiatives took the Out/In case.)  (Comment – it's a valid question, given the faster pace of innovation in the public internet, but I found it interesting that they had taken the other approach and developed a complete architecture with interfaces to allow a service in the network to access components outside.  Perhaps we need to go a step further and make sure that APIs and platforms for building services inside the network like this align closely enough with popular application environments outside the network to make it easier to build applications this way.)

 

Question:  Do internet service providers' (e.g. Facebook's) terms of service allow this kind of aggregation?  Answer – he thinks they do, but I'm not sure everyone agreed.

 

Question:  What challenges does this present in identity management given multiple identities in the internet?  Answer:  They are aware of the differences, but it’s not entirely clear what the resolution will be.

 

Closing Session

 

Stuart announced that ICIN will continue in 2014 under Noel Crespi as TPC Chair, with dates and location to be announced.  He then indicated that he had asked three people – Bruno Chatras, TPC Chair; Dean Bubley, the lead presenter from the WebRTC tutorial; and Roberto Minerva, former TPC chair and long-time ICIN supporter – to summarize what happened at the conference.

 

Bruno Chatras – Conference summary.

 

He gave some highlights from ICIN over the past 24 years.  It’s all been about increasing network flexibility.  In 1989 endpoints were dumb and all service logic was in the network.  In 2000 the focus was on SIP, with service logic split between the network and the device.  In 2013, service execution is in the device (or endpoints), but still logic and other capabilities exist in the network.

 

  • What's new in 2013?  Advocated approaches are similar, but the tools are different:  WebRTC and web technologies in general.  New tools for transport (NFV, SDN).  New endpoints (M2M, IoT)
  • New security threats
  • New business models (cooperation with OTT but also the car industry, public utilities, transport, etc.)
  • Communication technologies as a means to enter new market segments.

 

Dean Bubley – Summary with Focus on Real-Time Communication

 

Stuart asked Dean to provide a summary as a new participant in ICIN and one who would be able to give us some input on focus versus trends in the industry.  He has a long history in working on VoIP and converged networks, and consulting with Telcos.

 

Speakers have divergent views – QoS is/isn’t critical.  Interoperability is/isn’t essential.  IMS will/won’t be core to future services.

 

One accepted trend is that we are seeing fragmentation of voice and messaging, not convergence.  What we disagree on is whether this is good or bad.  He believes it is beneficial (more ability to meet specific circumstance, and more opportunities).

 

Not everything is a call.  NTT had examples (augmented reality, Air stamp, smart TV experiences, Office in the Cloud).

 

Islands of all sizes add value (lots of different services create islands)

 

Are we seeing the death of ubiquity?  In the past 99% of communication was via public network technologies.  Now, you have a whole host of services, each with its own rules for identity and services and interfaces tailored to a particular model.  Do we really want to force all communication into lowest common denominator and then try to enhance them with APIs? 

 

Regular question here – where’s the money?  Answer – it’s quite hard to extract cash from platforms and APIs. 

 

WebRTC is viewed as important – "almost inevitable."  Lots of use cases, recognition of near-term complexity but a rapid pace.  Lots of service opportunities, but there are differences in perspective on important questions (how critical is IMS interoperability?).

 

Roberto Minerva (Conference review)

 

He addressed the issue of network transformation.  There are different paths.  One is based on evolution of traditional architecture.  That was laid down in the 1990s (he showed a set of 1990s mobile phones as an example of what that architecture was designed around).  The traditional path has a focus on preserving QoS, preserving business models and investment.

 

He suggested that the traditional path has new opportunities in new countries.  The traditional path results in traditional thinking when integrating new technologies – it’s about phone calls – same attitude/need for interworking. 

 

Another path – leverage the network for newer services.  (He showed a Parlay X chart from at least a decade ago.)

 

How to exploit the cloud?  Integrate it in IMS?  Move it near the user?  Maybe there are differences by country or by region.

 

There are different paths – add value to the edge, IOT, context aware services with cognitive awareness.  These are all initiatives that aren’t about new technology for the same services and interworking doesn’t enter the picture.

 

A mis-interpretation of the web – our business model for the telephone network based on central control causes us to mis-interpret the web.

 

A question we need to answer – which way to go, what do operators want to do:

  • A traditional Telco?  This is where we have been going, but the path here is clearly downward for most.
  • A bit Carrier? – IMS serves this view.
  • A service provider? There are clearly opportunities and challenges

 

Doing something different will require different thinking.  For example, simply opening up your network and expecting developers to come to you and build services using it isn’t working. 

The Future of ICIN:

 

Clearly the reduced attendance raised questions about the future of the conference.  There were many discussions on this, both in the meetings of the Technical Program Committee and the Advisory Board, and during the breaks and networking sessions throughout the conference.  Like the message delivered in Roberto Minerva's summary of the industry, it's clear that doing the same thing will continue to achieve disappointing results, and that keeping the conference viable requires a new approach.  Several options were discussed:

 

  • End the conference with this year.  ICIN had a good run, but telecom has changed and it is time to move on.  There were supporters of this, but many took the position expressed by Max Michel, who said it's too soon to give up.  The conference is still in transition from telecom to next-generation services and draws excellent material, and there must be a base of support for it.
  • Align ICIN with another conference.  This had been done successfully in the past with the IEEE IN conference and in one case with an IMS conference, but Stuart  felt there was almost no carry over (i.e. few people came who wouldn’t have come without the alignment.)
  • Refocus as a more commercial conference.  This was discussed without much enthusiasm because the lack of commercial presentations has been a strength of ICIN.  Some commercial conferences are doing well.  They don’t take papers, but instead rely on vendors to show up and present their products and their visions and provide financial support for the conference, a significant departure from current structure of ICIN.  It was also pointed out that many commercial conferences have fared even worse than ICIN in continuing to attract attendees, and there is no guarantee of success.
  • Refocus as a more academic conference.  This is essentially embracing the change in attendance that has been taking place by aggressively recruiting academic presentations and attendees.  It doesn’t significantly change the structure, but requires increasing connections with a new community.

 

The final alternative (become more academically focused) was accepted as the chosen path.  To maintain financial viability it is essential that ICIN find a sponsor willing to supply a venue, Audio/Video equipment, and WiFi, as these things are very expensive to obtain commercially, and splitting that cost over a small number of attendees makes it prohibitively expensive for them.  Fortunately there were several offers for support already.  More difficult will be recruiting a new community of participants for the TPC who have strong academic ties and can actively work to bring in authors and attendees.  This was agreed to but without a specific plan as of the time I left the conference.  (Comment – one thing that makes this attractive is that Telecommunications is I believe a much more common program of study in Europe than in the US, and there is strong support for research in this area from European and international agencies, which results in a large community looking for places to present their work.)  Roberto Minerva said that he works on a conference supported from 6 IEEE societies.  They use the mailing lists for these societies (20,000 members) and get 200 submissions.  Their TPC has lots of university professors (and over 100 people!)  His take is that the TPC is key.  We need people in the TPC who can get papers and attendees.  (Comment – this is essentially saying that we, the current ICIN TPC, are out of  date.  We no longer represent organizations that have large research programs in this area and can as a result get papers from our organizations, friends, and colleagues.  I believe this is the correct diagnosis of the situation.)

 

During the breaks through the conference I had the opportunity to speak with several people about the future of the conference.  Many echoed the comment I expressed above, about how many of those of us loyal supporters of the conference, who in the past were active in research in this area and could bring in papers from our teams are no longer in those positions.  Secondarily, the industry structure has changed.  It used to be that the key decision makers in network operators attended conferences like ICIN, so the vendors of products for those networks were anxious to publish their research and applied research, and as a result supported ICIN and similar conferences in large numbers (Comment – my previous travel to Italy was for the ISS conference, in 1984, which had a similar mission and structure, and at which I presented a paper on how to accommodate packet voice, data, and other services in a single network.  I believe ISS ceased in the 1990s.)

 

Stuart Sharrock, the owner of the conference said that he was willing to continue, provided we could find a way of keeping it economically viable.   He said that in his experience many conferences are struggling and some more than ICIN.  (He noted a conference in Asia that dropped from 500 attendees to 22 in 5 years in spite of a timely focus, and said that by contrast ICIN’s trajectory hasn’t been that bad.)

 

The decision to continue was not popular with all.  The conference unfortunately lost Hagen Hultschz, who had been an active member of the IAB and felt the time had come to move on.  He will certainly be missed by all.

 

Given the discussions we could not announce the next dates and venue, but Noel Crespi was announced as the next TPC Chair, and Noel and Stuart will be left the task of working out the details.