ICIN 2010: Weaving Applications into the Network Fabric

Notes by Warren Montgomery

 

Introduction

This is the 14th ICIN conference and the first held outside of Bordeaux, France.  ICIN, for anyone who hasn’t encountered it, is a conference started in the late 1980s to bring together people involved in the then-new area of building programmable networks that could rapidly incorporate new services.  While it began focused on the Intelligent Network standards for telecom networks, beginning in the late 1990s it moved towards addressing the convergence of the internet and telecommunications.  It is now exclusively focused on how to incorporate services and technologies from the internet world into telecommunications, and on how telecommunications operators can stay relevant and profit in an era where many services appear rapidly in the public internet and many question the relevance of network-based service implementations.

 

The conference drew about 150 people, roughly half of whom represent its loyal following, who have shown up for many years, and half of whom are new to this topic.  The change of location may have enabled more people to experience ICIN, as Berlin is an easier city to reach, especially for those in Northern Europe.

 

Major Themes

 The major themes that I got from this year’s conference were:

  • Application stores are valuable, but not as a source of revenue.  There has been a lot of buzz about application stores for iPhone and other smart devices.  The downloading of applications does represent a significant opportunity for developers as well as the storekeeper, but the revenues involved are small compared to either what the device makers get for the phones or the carriers get for service subscriptions.  As a result App stores and application downloading will not significantly enhance the operator’s bottom line, but having access to the right devices and applications is important to attract customers who will then sign up for profitable service plans.
  • Service creation is again important.  Service creation was in a way the reason for ICIN, but decreased in importance as it became clear network APIs and service creation environments would not lure swarms of developers.  Web2.0 has revitalized service creation, empowering both developers and end users to create and customize services through simple and powerful interfaces.   As a result I expect users will increasingly see their communication in terms of services that were not created by an equipment vendor or a network operator.
  • Social networking is important, but we don’t yet know how to exploit it.  Much work continues to be done on tying communication services to social networking.  A newer trend this year was using social networking to promote communication services or to build relationships with advertisers.  There were many papers on social networking related topics, but how to use them most effectively in telecommunications is still not clear.
  • IMS has matured and is useful.  IMS was a major theme of the conference in the past.  We still had a session devoted to IMS and others devoted to IMS related extensions (e.g. EPC), but the work has moved from concepts and trials to more routine application.  IMS has become a background against which services are created, rather than a topic for refinement and extension.
  • Identity and Security are key problems needing more attention.  A growing number of papers covered topics in these related areas, and at least one pointed out some critical flaws in current practice.  To reach the full potential of the smart mobile device to become part of business transactions and personal finance will require that the users absolutely trust the device and the network and not find the required authentication and security procedures onerous.  We recognize the need and are working on solutions.
  • Cloud computing is an opportunity for network operators.  Cloud computing, or “X as a Service” is gaining increasing attention.  The opportunity for network operators seems to be to provide the high quality “glue” that binds the pieces of the solution together and binds them to the end user, but it’s not entirely clear what the financial return from such a contribution would be.
  • The expected tsunami of mobile data is here.  A year ago many at ICIN predicted a rapid rise in usage of mobile data from video, downloading and other uses.  Over the past year we have seen that, and it’s still coming.  The result is putting a lot of strain on carriers to keep up.
  • The Internet of things is coming.  We are seeing a rapid explosion of connected endpoints, many of them relatively dumb and many behind gateways that provide firewalls and address translation.  Solving the problems of addressing these devices and sharing them securely is a challenge for which solutions are being developed now.

Complete Notes

 

Aside on the venue 

 

This is the first year that the conference was not held in Bordeaux.  The conference this year was held in two locations: the Park Inn, a modern high-rise hotel originally built to host dignitaries visiting what was East Berlin, and the city headquarters of Deutsche Telekom.  Both were excellent locations for presentations, about a mile apart on foot (buses were also provided).  As in France, internet availability was a challenge here – hotel internet is very expensive (about $10 for an hour).  The conference supplied internet, and it worked differently in the two venues.  At Deutsche Telekom each user got an individual user name and password, which had to be re-entered every time you connected.  This turned out to be quite annoying, as something about the service caused you to become disconnected after 20 minutes or so of use.  (In fact, in my case there was no simple way to reconnect once that happened without explicitly disconnecting from the wireless network and then reconnecting to it.)  In the hotel it required entering a simple code (the same for all attendees), and once I did that I stayed connected as long as I wanted; I wasn’t asked to re-enter the code even when reconnecting a day later in a different meeting room.  As with most other conferences, the real challenge for anyone using a laptop was finding power outlets, which were in short supply.  I envy those with devices with long enough battery life to be able to use them all day.

 

The social part of the program included a Gala dinner, traditional at ICIN, this time sponsored by Deutsche Telekom and held in their facility, which was very good.  It seemed somehow appropriate to have a Gala dinner in a building which had once held switching equipment and human operators.  There was also an organ recital by Igor Faynberg of Bell Labs, who proved to be an excellent musician and performed an hour of baroque organ music assisted by Hui-lan Lu in operating the organ stops. 

 

Tutorial – How smart will the future bit pipes be?

 

(I could only attend a little bit of this excellent program due to attending committee meetings.  One change considered for next year is moving the meetings to avoid interfering with any of the program.)

 

This was taught by the Fraunhofer FOKUS organization.  One key message is that the telecom industry’s “value added” in services is decreasing as more moves to the internet.  That’s a problem because that’s where the value is and where the revenue will be drawn out.  The 3GPP Evolved Packet Core (EPC) is maybe the last real infrastructure initiative capable of adding value in the network.  IPTV may be an area for operators, but look at Apple and YouTube doing “over the top” delivery as a model.  (Comment – FOKUS had a lot of presentations emphasizing the potential for EPC to be a platform both for evolutions from the mobile network and IMS and for those from the public internet.  This is not a universally held view, as some feel the internet community will largely ignore it.)

 

IMS exposes APIs, but way too many and too much variety.  The trouble is that most users now have profiles with internet providers (e.g. Facebook) and that’s where the center of services and context for the user resides.  They may use IMS APIs, but IMS is likely to be only a small player in this.

 

One of the problems is the multiplicity of influences on IMS – IN, GSM, Parlay, etc.  Another is the openness of SIP: it doesn’t force users into a particular model.  (Comment – you could also say that of the internet.  Openness creates opportunities the creator of a model didn’t foresee, but it also leads to a lot of redundant solutions.)

 

Many people think SIP when they think IMS, but DIAMETER is maybe more important (DIAMETER handles access charging, logging, profile data, authentication, authorization, etc.).

 

In the discussion at the break and towards the end of the morning, it became clear that many capabilities of IMS wind up being re-invented at higher levels in services that run over the public internet or other infrastructure that does not support them (e.g. roaming, resource management, authentication, etc.).  This is controversial to the telecom community, which sees it as wasteful, but it is perhaps unavoidable.

Some notes from Technical Program Committee:

Attendance this year is about 150 people.  This is very good, but given the quality of the program it should be able to attract a lot more people.  One of the key challenges is that it is too difficult to get support to attend a conference when you are not presenting, and not everyone can present if you are trying to maintain a top-quality program.  (Comment – this is indeed the case and a real challenge for the industry.  ICIN is rare in this respect in being quite selective about the papers it takes.  Selectivity has another benefit, which is that it avoids the need to have massively parallel sessions; this means that everyone attending has access to at least half the presentations, and all tend to have a similar experience at the conference and common topics for discussion during the breaks.  The downside is that you then must persuade people who are not presenting to come, and many employers are reluctant to support this.)

 

Best papers were selected through a somewhat tedious voting process by the TPC.  (Comment – I believe the major problem was starting too late on this.  I have no objections to our selections, but I think we would have had more careful consideration if the list of nominees (and the actual papers) had been distributed at least a week in advance, with instructions to return your vote only to the TPC chair by a deadline that would allow votes to be compiled.  Voting in the room consumed valuable time, and worse, I suspect some votes were influenced by hearing who else was voting for what.  We may still want some kind of confirmation of a selection done via “secret” voting, but I think giving people enough time to consider it and make their choices independent of having seen others’ choices would be beneficial in bringing out a broader set of papers.)

 

There was some discussion on next year’s ICIN.  One of the issues is how to get the summary paper out much faster and to strengthen the relationship with the IEEE.  One of the keynote speakers is associated with IEEE Communications and has volunteered to help get it published, but the key is for us to work quickly to get the summary out.  Stuart also mentioned that there is a white paper from the UMTS Forum which he wrote that references ICIN heavily, but it was held up for 6 months because of controversy over an unrelated concept.  The key is to act fast!  (Comment – the answer is “Just do it”.  After the conference a couple of us took the lead in producing a summary conference report for IEEE Communications.)

Keynote Session

Stuart Sharrock (ICIN Events)

 

Stuart opened the keynote session by introducing all the sponsors.  (Comment – ICIN has gone from having lavish corporate sponsorships to relatively lean times, and now has much more support again, though not what was common in the 1990s; the world is different now.)  He also gave some information about the size of the conference, the program, and the venues.  Heinrich Arnold from Deutsche Telekom Laboratories talked a bit about their sponsorship.  The location is their Berlin office, the site of the first telephone call in Europe, as well as near the site of the first “packet switching” (air tube messaging) and near the site of the first yellow pages (the directory of 99 fools, participants in an early collaboration).

 

Heinrich’s statement was that ICIN was the first conference or organization to recognize the value of enabling services – it brings together the opinion leaders in the industry.  (Comment – I would agree.  The decision makers don’t attend, but the people who have the views for the future and shape long term plans behind the scenes all do).

 

He gave an interesting example about innovation – it’s not just about creativity.  He showed a bottle of German wine from a maker that is attempting to duplicate a good French wine and outdo the original in quality, and which has won several awards.  Innovation can be about perfection of a concept, not just the original concept.

Max Michel   (France Telecom). 

 

Max is the technical program committee chair this year and introduced the theme and the selection process for the program.  ICIN is a very selective conference, accepting typically about 1/3 of the papers submitted, from a submission pool that is already very strong because of the reputation of the conference.

 

He introduced a video from Malcolm Johnson, who is with the ITU and could not attend because he was attending the ITU plenipotentiary conference in Mexico.  There will be a major revision of standards by the ITU in 2012.  One of the drivers is services and NGN.  One of the most innovative and valuable outputs of work on services has been the development of IPTV.  Going forward there is a huge initiative in global standardization of IPTV to avoid format wars and confusing differences for consumers.

One of his interesting observations was that “consortia” standards, which dominate this world, tend to be regionally focused without interoperability.  (Comment – I think this was a not-so-subtle dig at the situation in the US, and maybe the EU, as he went on to cite a lot of third-world and developing-world countries which implemented IPTV to ITU standards and the benefits they received from it.)

He made a strong plea for maintaining support for standardization as a way of reducing costs in the long run and improving customer experience and therefore getting loyalty, in an environment that often challenges participation in these activities.  (Comment – I think this was an excellent presentation, and more to the point than the standards keynotes often are, covering the “why” of standardization and the impact of it without getting lost in the details.)

Is the Apple Rotten? (Alan Mottram, Alcatel-Lucent)

Ironically the talk was delivered by Philipp Kelly, also of ALU, because Alan had become ill because of something he ate.  The talk was about getting to market and things that go wrong, with an analogy of getting fruits and vegetables to market.  “Apples” are the applications, and the telcos and application stores are the channel.

 

Telcos spend 85% of their capex on developing internet capability, but only 15% of the traffic through their networks is telco related.  How do you proceed?

 

An early answer was walled gardens, which confined the user to telco-delivered services.  This was okay as far as it went, but not as effective as what has been done later through application stores.

 

He reported on a survey of teens on applications, concluding “the fruit isn’t ripe” – 62% are frustrated with their experience with Web2.0 applications because of bugs – video freezes, long delays, too many passwords, too much redundancy.

 

Consumers buying fruit are willing to pay a premium for fruit free of “blemishes”.  Are application customers going to do the same?  (Comment – food is a health issue; I think the motivation for blemish-free fruit is a lot stronger than for a perfect recreational IP video application, but I may be very wrong about where users put their money.)

 

“Fruit needs fertilizer” (i.e. money, in the application analogy).  Are users willing to provide it?  According to their data, yes, users are willing to pay for premium applications, mostly based on acquiring the application once for unlimited use rather than paying per usage.

 

Another lesson – don’t stick users with particular platforms and applications – Fruits are good, but a balanced diet requires variety.

Bengt Nordström (Northstream)

“No matter what technology is used, your monthly phone bill magically remains about the same size.”  (Comment – I’d add to this: “And your cable bill will go up 10-20% a year!”  That is the challenge.  My phone bill is the same as it was in 1980; my cable bill is probably 5 times as high!  I think that’s typical.  The telecommunications industry has always needed to figure out how to deliver things that customers are willing to pay more for, rather than simply deliver familiar services in a new way, which does not get users to ante up more cash.)

 

Economy of scale is important; what offers the best economy of scale?  “One Google and one Nokia, but 800 mobile phone operators!”  (Comment – yes, that’s a big issue.  As an application builder, would I rather build something for which I have to do a deal with each of those 800 operators, or build it on the internet, where I have instant access to everyone who could be a potential customer?  No question in my mind what’s going to win the most attention from developers.)

 

(Second Comment – I firmly believe this is a major reason why the technology industry in the US and EU is in trouble.  Historically these have been the big markets for technology, but going forward it seems clear that the largest markets are going to be in Asia.  Technology development naturally belongs close to manufacturing or customers and neither is going to be in the west.)

 

Internet companies can reach the whole world with a few hundred people.  Network operators may claim to be global, but all are regional; even the biggest get only 30-40% of the markets in which they participate, which are not all the markets in the world.

 

You hear about proposals for a “Google tax” – if internet players get all the money, why can’t we tax them to support the telecom infrastructure that enables it?  The answer is that it won’t work, because the internet players aren’t getting enough money.  Google has ARPU of €0.08 per month (2008) across 225M customers, versus €18.6 per month for 5.3 million customers for Telia (Sweden).  Revenue to Telia for a Google-like service on their customer base would be irrelevant.  The issue here is that the internet players don’t earn enough per user to pay anything significant to the operators on a per-user basis.  Google et al. don’t take revenue from the operators.  (Comment – okay, but there’s a big flaw here.  Google doesn’t take revenue from the operators; it is destroying their business model, which was historically based on collecting a lot of money for things that weren’t difficult to do in order to pay for things that were hard to do but not perceived as valuable – like universal service.  Google is a disruptive player that, as things play out, is likely to move operators away from the ability to subsidize unprofitable activities with high fees on user services that could be provided much more cheaply.)
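The arithmetic behind his “irrelevant” claim is easy to check.  This is my own back-of-envelope sketch using only the figures quoted in the talk (Google ARPU of about €0.08/month in 2008; Telia at €18.6/month over 5.3M subscribers):

```python
# Back-of-envelope check of the ARPU comparison quoted in the talk.
# The figures come from the presentation; the calculation itself is mine.

google_arpu = 0.08       # EUR per user per month (2008 figure from the talk)
telia_arpu = 18.6        # EUR per user per month
telia_subs = 5_300_000   # Telia's subscriber base

# Monthly revenue if Telia somehow ran a Google-like business over its own base
google_like = telia_subs * google_arpu    # EUR 424,000 per month
telia_revenue = telia_subs * telia_arpu   # roughly EUR 98.6M per month

print(f"Google-like revenue on Telia's base: EUR {google_like:,.0f}/month")
print(f"Telia's own service revenue:         EUR {telia_revenue:,.0f}/month")
print(f"Ratio: {google_like / telia_revenue:.2%}")  # well under 1% -- hence "irrelevant"
```

A Google-like revenue stream on Telia’s base comes to less than half a percent of Telia’s service revenue, which is the speaker’s point: there is simply no per-user money there to tax.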

 

He claimed that operators collect much more money than internet players (80% of the pie) and projected that situation to continue (maybe declining to 70% or so).

 

The “golden rule” of markets – the number 1 player has 40%, number 2 maybe 25%, number 3 15%, number 4 10%.  (Comment – this is essentially the “rule of 3” from business-school thinking over the years.)  His claim was that the number one player in the telecom industry was almost always the legacy national carrier (or former monopoly holder in countries without a national player), while other players’ positions could be tied to when they got their spectrum licenses.  “This is an infrastructure business that is all about people buying and building out infrastructure that requires a huge capital outlay.”
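As a quick illustration (my own sketch, not from the talk), the quoted share pattern sums to 90%, leaving roughly 10% for fringe players:

```python
# The "golden rule" market-share pattern quoted in the talk: a heuristic
# rule of thumb, not a formula from the presentation.
golden_rule = {1: 0.40, 2: 0.25, 3: 0.15, 4: 0.10}

fringe = 1.0 - sum(golden_rule.values())
for rank, share in golden_rule.items():
    print(f"Player #{rank}: {share:.0%}")
print(f"Remaining fringe players: {fringe:.0%}")
```

The point of the pattern is that the top four players absorb nearly the entire market, which fits his observation that positions below the legacy carrier were set largely by spectrum-license timing.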

 

One of the real issues was the “crazy” spectrum auctions that generated piles of money for governments but drained capital out of an industry that had trouble building out that infrastructure.  (Comment – interesting.  The group I was with at Bell Labs in the late 1990s had some people making long-term projections and studying those auctions, and they expressed exactly this concern about the future.)

 

“With each technology migration, the market becomes more entrenched and entry thresholds increase.”  The view here was that because of the huge infrastructure investment in towers, rights of way, etc., we are unlikely to see any new entrants for 4G and beyond.  (Comment – an interesting view – this suggests mobile service technologies have been “sustaining” rather than disruptive.  I really wonder whether a combination of a new technology like WiMax and mesh networking (i.e. end-user devices relaying messages) might create a disruptive entry option in which someone can enter the business with a small investment and a very different technology.)

 

Conclusion:  ARPU can be stabilized but costs will not be, and costs follow the same growth pattern as traffic.  (Comment – this presumes you get no benefit from technology which crams more bits over the same infrastructure.  I don’t know about that.)  The “bit pipe” model has been much maligned, but maybe it is sustainable – just charge customers for what they are using and be of value to them.

Thomas Michael Bohnert (SAP)

“The internet is broken” (2005 article by David D. Clark of MIT).  (Comment – oh yes, Dave Clark was my Ph.D. advisor and I had an interesting dialog with him over the paper.  I claim it’s not the internet that’s broken, it’s the bad software monopolies that surround it (i.e. people making bad choices about critical software platforms that were never designed for the environment they now operate in).  Dave wasn’t quite willing to concede that, but did see my point.  I believe the real problem with that much-quoted statement is that it’s way too broad, and people read too much into it to support whatever it is about the internet that their own work or company is trying to “fix”.)

 

Two big challenges for the future: a variety of new applications and endpoints, and cloud computing (i.e. putting everything on the network means you have nothing if the internet doesn’t work properly).  The internet architecture is a patchwork, with lots of things put in over the years that break the design principles.  One of the proposals here is to start over from scratch: Nets, FIND, GENI – “clean slate” internet research.  (Comment – the current issue of one of the computing technical magazines has a debate on whether this is the right thing to do – it’s timely.)

 

Some say the future is here: GENI.  (Comment – I wonder.  “Internet 2” was a similar initiative, but what it did was mainly just absorbed into the internet.  I believe opportunities to start over clean are few and far between.  The value you get has to FAR exceed the present to make a disruptive jump attractive.  Fixed phones couldn’t incorporate some of the things now common on mobile phones because doing so would make them cost too much, and users wouldn’t accept the cost increase for some marginal features; but mobility is so valuable that users are willing to pay more and accept a service that isn’t exactly compatible with their expectations from the fixed world.)

 

He had lots of description of work in the EU, Japan, and other places on disruptive views of the future of the internet.  What he led up to was the formation of a common initiative on the future internet in Europe, EFIA, which is poised to lead the evolution of the internet.

 

Common discussion period (Stuart Sharrock leading)

He began with a story about being an academic, getting tenure, and recognizing that it made little difference to him, so when he got an offer to become an editor for Nature he made a move to the commercial world.  One of his first duties was to write a piece for the Times of London on the Nobel Prize in Physics, which was awarded to a friend of his.  He gladly accepted, thinking he had 3 weeks to do it, and was shocked to discover he had to do it in 4 hours.  Are we as an industry facing the same kind of shock, because the industry we are joining is moving much more quickly?  His observation was the 3 visions given by the speakers were all of the “academic” view – take your time and get it right, rather than the need to react with speed driven by competition.

 

Bengt – we can’t get around the need for standards, and standards are by definition serious.  He felt standards will always be important; what we need to realize is that it’s not just the operators that will form them.  Stuart responded that we are working on standards for rich communication services, have been for years, and after some time we will be able to replicate what was available from Skype 2 years ago.  (i.e. can we afford to move at the rate we are moving?)  Bengt – probably not.  Apple is launching a video service, and is likely to rapidly exceed the subscriber numbers of 3G video telephony and claim they invented it.

 

Stuart asked the others about the statement: “Is what we are doing in standards too long and too complicated to avoid getting bypassed by the more rapidly moving players?”  Thomas – is Skype successful because they just did it, or is it really that they found a solution to allow it to be used behind firewalls and NATs, while others just relied on SIP, which required constant intervention by users to open up the right ports?  (Comment – this really is support for the theme that to succeed, something has to be simple – it can’t require that the user learn to do anything complicated or do anything tedious, including paying too much for it.)

 

Philipp asked Stuart if he made that 4 hour deadline – the answer was yes, but only because he had actually worked with the Nobel Prize winner for years and already knew all about him and his work.  He also said that’s not the point – the world that we all came from in the communication industry was based on slow processes that “get it right”, but we are now driven by competitors with a very different way of operating.

 

Philipp – what do I do if I want to have a video conference with someone who doesn’t have a compatible device or application?  (Comment – it’s a problem, but Skype doesn’t care.  One of the key differences is that telecom operators have a small set of customers and a universal-service mindset that what they deploy has to work for all of them.  Skype has access to a huge universe of users and doesn’t need to worry about the fact that some of them have old equipment or slow or choked connections and can’t use their application, because the universe of people who can use it is more than large enough.)

 

Stuart’s answer was it’s all about regulation – regulators won’t let carriers pass 40% market share, which enforces this division of the market that causes operators to need these slow collaborative processes to do anything.  Heinrich – it’s obvious why operators don’t do Skype: it doesn’t return anything (well, not immediately).  One of his points was that operators like to do big things with universal appeal, not lots of little applications.  (Comment – yes, there is no killer app that you can identify.  Another interesting piece of history: the old AT&T would license technology to others, but the process was so cumbersome it could not do so cost effectively for less than on the order of a thousand dollars, because each sale required significant effort to work out the contract.  That would price it out of the market for most innovative software technologies today.)

 

Stuart made an interesting point – 10 years ago location was viewed as a key operator revenue producer, but in fact the operator has been completely bypassed as users have purchased GPS enabled devices that get their location without help from the operator.

 

Question (Chet McQuaide) – he was struck by the comment about the ARPU from voice being constant.  He felt this wasn’t always the case, but close.  His piece of history was that when they introduced a new service, 800/Freephone, it created a huge revenue source for operators but didn’t raise any individual’s bill.  Are there other opportunities like this?

 

Philipp – their panel of teen users was supportive of 3rd parties sponsoring premium services.  Bengt – he gave a lot of discussion on advertising and sponsorship as another potential new revenue stream.  One problem is that overall operator revenues dwarf these kinds of returns.  Heinrich – operators have the opportunity to expand through “packaging”; if they can put together attractive packages bundling devices, bits, and services they can compete effectively.  That’s not what’s happening, though; instead they are trying for exclusive deals (i.e. being the exclusive distributor of the iPhone), which unfortunately fragments the market and encourages bypass.

 

Philipp – isn’t Google search a sponsored service?  Isn’t that a profitable service?  (Someone else noted the Android platform is the same way.)

 

Comment (DT person) – there are two problems with the “Google doesn’t steal from telcos” argument.  The first is that telcos individually aren’t as large as reported.  The second is that Google requires resources from telcos to deliver that service, but they aren’t paying for those resources.

 

Host country Keynotes

The second half of the keynote session presented speakers from the host country (Germany)

Thomas Curran (Deutsche Telekom)

His theme was that they want to become more like a software company.  He went through the typical software company development process, which is focused around APIs, and went through a set of stages to identify needs and deliver applications.  Interestingly enough, he used location as an example, talking about the ability to use maps from “the cloud” using mobile data and location from the mobile phone.  (Comment – interesting, since others said the carriers essentially lost this market.  I think in the US stand-alone GPS devices with the maps in the device dominate, but the amount of international travel in Europe vs the US may make a big difference in the ability to hold enough information in the device and keep it up to date.)

He talked about their “Developer garden” initiative. 

 

He talked about the appearance of innovative voice services (different kinds of routing, call handling, etc.), the message being that innovation in voice isn’t over.  Some things he highlighted were language translation, text to speech, and unified messaging.  (Comment – I still don’t get unified messaging.  I get 100+ emails a day and there’s no way I want to see them on a mobile phone or have to wade through an audio interface to deal with them, but I may not be typical.)

 

Digital money was another area for innovation – the phone as your wallet.  (Comment – about this time my laptop put up a blue screen.  I had a momentary panic.  I can’t imagine what would happen if my finances were entirely dependent on an electronic device which failed on a trip in a country where I don’t even speak much of the local language.  Are people really ready for this?  It will clearly tax the ability to provide reliability and security.)  The telco is a natural player in digital money because of its existing billing relationship with the customer.  (Comment – yes, but there are other competitors for this position.)

 

He talked about the trends to digitize automobiles and the home and potential roles for the carrier here.  The carrier is a natural player for entertainment and content in this.

 

Final thought: “Don’t ignore the internet” – he talked about how the mobile industry tried many times to do that in developing walled garden approaches that limited user access – they lost every time. 

 

Sigurd Shuster (Nokia-Siemens Networks)

His keynote was on identity management and Web2.0.  He talked about how communication technologies go from being cutting edge for early adopters to being completely absorbed in our lives.  He gave some examples, with mobile phones being at the “absorbed” stage, and iPads being at the “early adopter” stage.

 

A typical user has multiple aspects of identity for different purposes.  He went through a lot of identities a user might have for business, for communication, for family, etc.  The number one problem the user has with this is too many user IDs and passwords.  (Comment – right on)  Secondarily there was distrust – to use something the user discloses information and that carries risk.  The need to disclose personal information to multiple sources and especially to services the user does not yet trust is a major disincentive to use.  He had an interesting update to the old cartoon showing that “on the internet nobody knows you are a dog”.  Now people you deal with on the internet are sharp enough to figure out you are a dog, what your breed is, and sell you specialized dog food.  Big brother is watching.

 

80% of users think privacy is very important and 76% are concerned about mis-use of their private data.

 

Whom do customers trust?  Curiously enough banks still score the highest (even with recent bank and financial scandals), but the mobile ISP scores very high along with Google and on-line communities.  His view was customers trust telcos because they pay them.  (Interesting, I suppose the feeling is that users won’t do business with someone they don’t trust but they might participate in anonymous transactions with someone untrustworthy.  There was a question in another session about this view, in that the feeling is that trust of the mobile operator may be regional – in the US we don’t have high trust for them.)

 

He talked about what it took for operators to build a circle of trust including brokering identity, tracking usage to predict what the user wants to do and help them, and then personalizing the users experience around what they want to do and how they want to use the devices.

 

Can telcos do this?  It requires speed, and they have regulation to deal with as well as a variety of capabilities.  Standards take way too long for this world.  The bottom line is that it’s a competitive world and users will decide what they want.

Felix Zhang (Huawei)

The digital information era is coming (or already here).  Information is now critical to many things that don’t seem obvious, like farming or manufacturing.  The network can bring information, but can expand to bring knowledge and even wisdom (i.e. getting expert help from the network).  Cloud computing is creating a new opportunity for the network.  One view is that the migration of services to the cloud and to the terminal leaves the network out of it, but instead the reality is that the network has the opportunity to add significant value as an intermediary by supporting the ability to deliver (distribution, finding customers, brokering access, etc.).

 

The digital era is creating a lot of surprises.  One he showed was prediction vs actual usage of the Apple App Store – reality was 13X the prediction.

 

Question session

 

Stuart Sharrock made some comments – the presentations were mainly focused on Tier 1 carrier needs.  What’s the reaction from second tier players?

 

Thomas – his view was second tier players could not support everything, and would outsource things that had economy of scale, mainly their basic network operations, to the tier 1 players who had economy of scale there, and focus on value added services.  (Comment – interesting.  In the US, almost the opposite is happening.  The tier 1 carriers have been shedding basic service in outlying areas (rural or just separated from their core service areas) to help them focus.)  An opposing view was presented that the smaller players had lower overhead and could do the basic services cheaply.  There was another view from Sigurd that there may be an opportunity for small companies to become specialists in particular needs, especially if they aren’t as tightly regulated (i.e. boutique telcos).  Stuart indicated that small companies have regulatory requirements that mean they have just as much overhead as the big players.  No consensus was really achieved.  (Comment – In retail you can often buy something from a discount retailer, a full service department store, or a boutique specialty retailer.  The variation in what you receive is a lot less than the variation in how you get it (i.e. how much help do you get in choosing, can you get it altered to fit, etc.), which I think is what people pay for when they pay more than discount.  Perhaps the same model applies in telecom services?)

 

There were some more questions about what a software company and what an aggregator really look like, without definitive answers.

 

Overview of Poster and Demonstration Session (Warren Montgomery, Insight Research).

This year I was given half an hour to give the highlights and advertise the poster/demo session.  I think this was a useful thing to do and I was glad to have the opportunity.  We had an excellent set of presentations this year, and with a reception Tuesday evening in the poster/demo area there was much more time for people to interact with the presenters.

 

Session 1B App Stores and Service Providers (Steffan Uellner T-Systems Chair)

 

This was the first session of the morning on Tuesday.  After an excellent networking reception Monday evening some excitement was needed, and the session delivered it.

App Store Strategies for Service Providers (Kristofer Kimbler, K2K Interactive)

Kris Kimbler – Kris was the founder of Appium, one of the successful application development companies spanning the IN and IN+Parlay era, and an outspoken speaker at ICIN events for years.

 

Today, there are about 80 app stores, 300K applications, 20K new applications per month, 60K developers, 6 billion downloads and $2 billion in revenue (estimated by the speaker).

 

Apple is really dominant (225K applications; Android is 30K and growing – the fastest growing by percentage, not in absolute numbers).  Apple’s success is a combination of a good platform and an eco-system.  One big factor is that the app store client comes pre-loaded in the devices, something others didn’t do.  (Comment – I think the existence of iTunes and the huge number of people familiar with it and with accounts to start with had a lot to do with it.)  Apple operates on a revenue sharing model: 70% for the developer, 30% for the app store.  This is a radical departure from what happened in previous technologies, where the application developer got some fixed fee and the operator got any upside if the application proved popular.  The revenue sharing model really jump started the app store process.

 

Some interesting things:  The App Store lagged the iPhone by over a year – Steve Jobs was against it and lost, but debate within Apple delayed its introduction.  (He has a reference in the paper that covers the history and Steve Jobs’ role in particular.)  Apple has the lion’s share of developers (45K vs 10+K for Android).  There is currently very little overlap among developer communities for platforms, and those who do multiple platforms probably represent the big gaming companies and others with lots of money.

 

While iPhone has a lot of hype, all smart phones together will only be 28% of the mobile handset market in 2013.  Apple is only a small fraction of that.  (Android is projected as larger than Apple by then). 

 

The reality of application stores:  15% of smartphone owners download regularly, and games outnumber communications applications.  (Other speakers had statistics showing that a huge number of those applications are little variations on simple things and that the biggest categories of applications are “flashlight” and “address book”.)

 

He has lots of data on the costs of applications: on Android, 60% are free.  Most are less than $2.  The average price is $4.  Figuring the Apple numbers, the total amount available to developers is about $1B, but most goes to the top few companies.  Most will not get enough from the application stores to cover costs, but they develop anyway.

 

The business case for Apple is that they get half a billion from the downloads, but their hardware sales are $25Billion.  (i.e. applications are important, but mainly they are important because they sell the hardware, which is 98% of the revenue.)

 

Application developers are now hard to attract: everyone has a store, and there are too many choices.  Money isn’t enough to attract developers; you have to offer more.  Maybe the store can help with advertising.  Maybe the store offers some kind of celebrity affinity “coolness” for developers. 

 

The most probable model is that most operators won’t have an app store, they will let customers download from the big stores.  An intermediate option for operators  is to aggregate (i.e. one store serves multiple operators), where the operator may get only 10% of the revenue (because the store operator gets a large share).

 

The average iPhone user spends $25/year on apps.  But ARPU in Europe is $30/month.  50 million users pay $18B/year for service, but the 4% of users who are regular downloaders will only generate $50M in revenue.  (Again, the application revenue isn’t significant.)
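The arithmetic behind this is easy to check; a quick back-of-the-envelope sketch using the speaker’s figures (50 million users, $30/month ARPU, 4% regular downloaders spending $25/year):

```python
# Back-of-the-envelope check of the speaker's figures:
# 50M subscribers paying ~$30/month vs. the 4% of them
# spending ~$25/year on applications.

subscribers = 50_000_000
arpu_per_month = 30            # dollars
downloader_share = 0.04        # 4% download regularly
app_spend_per_year = 25        # dollars per downloader per year

service_revenue = subscribers * arpu_per_month * 12
app_revenue = subscribers * downloader_share * app_spend_per_year

print(f"Service revenue: ${service_revenue / 1e9:.0f}B/year")   # $18B/year
print(f"App revenue:     ${app_revenue / 1e6:.0f}M/year")       # $50M/year
print(f"Apps as share of service: {app_revenue / service_revenue:.2%}")
```

App revenue works out to well under 1% of service revenue, which is the point being made.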

 

The conclusion here is app stores won’t generate much revenue for the carrier.  They are really there to help sell phones.  That’s true whether you are a carrier or a handset vendor.  There’s no business case for a carrier based application store other than “Pride”. 

 

App Stores: Friend or Foe (Rebecca Copeland)

Rebecca is another familiar speaker in ICIN, now an independent consultant.

She gave a lot of the same figures as Kris, but had a very different focus on it – look at all that money going to developers and Apple.

 

She described her own experience – user overload – she wanted a currency conversion application, went shopping and found hundreds – which should she pick?  On the positive side she said a big factor here is that the app store has a micro-payment model – it’s good that the apps are the price of a cup of coffee, because that’s the level that people will actually pay. 

 

She described a few applications – many combined the camera with communication and hid the use of MMS to upload information from the user, making things very easy:  Buy houses, contact your garage, report graffiti, etc.  Another one automatically reported bumps in the road to the transportation department.  Lots of these were linked to other companies or government agencies (i.e. the user may get the app for free because using it benefits someone else!)

 

Lots of these are regional in focus.  The app is only relevant for a small group of users.  (That’s part of the reason for the fragmentation of applications.)

 

Lots of apps essentially replace advertising, and one big benefit is “click to connect”, which avoids the user getting lost trying to find the right 800/freephone number to call to purchase.

 

One thing about stores she said is that there is a difference between who puts applications on different stores – 4% of apps on ATT are games vs 56% on Apple.

 

Many applications are small, simple, one-shot use.  (Interesting – adapted to what I term “the short attention span generation”.)  Applications are disposable.  This is an alien model to a telecom world which looks at long term subscriptions, contracts, and 40-year lifetimes. 

 

More interesting facts about users:  US users spend 79 minutes/day using applications, vs only 12 minutes on older phones.  The amount spent per transaction is below what is acceptable for credit cards.  Users don’t expect support, but they want 24-hour no-penalty return.

 

What are they spending 79 minutes on?  Monitoring – watch what your pets (or children) are doing at home.  (Comment – I’ve read about things that generate Twitter IDs for pets and periodically post tweets from them.  I wonder how much of this is just a pet rock style fad, but I suppose that’s not important)

 

Lots of developers are going towards Android – the message is developers are fickle – they go to where the biggest market is in their view.  Only 1% of applications do “useful” things. 

 

She had some interesting statistics on application stores from different vendors – there’s almost a price war going on from the manufacturers and the app stores are offering many applications free just to entice buyers.

 

Application stores represent a threat to the carrier because the relationship with the application store means the store knows more about the end user than the carrier.  There is a lot of concern about “Over the top” applications cannibalizing carrier revenue, but it may not be a big threat because in reality those calls will cost more (i.e. Skype over mobile data costs more than a phone call especially when roaming).

 

Her bottom line was that application stores are synergistic with carriers – friends they have to have.

 

Question (SAP person) – who in the audience has developed a mobile phone application?  (Maybe 5 hands out of 50 people.)  He went on to talk about lots of problems in doing this – lots of overlap with other applications, issues learning and using the language and tools, etc.  Rebecca made the point that building apps isn’t new; what is new is the open structure which allows many overlapping applications to get to the market and the market to decide who is best.  (Comment – I wonder how users make those choices, the number of choices must be overwhelming.)

 

Simon Becot (Orange Labs) Empowering Telecom Operator Convergence through a common marketplace.

 

This presentation covered the Servery CELTIC project, which creates a common multimedia marketplace for mobile and broadband users in northern Europe.

 

Servery seems to be aimed at supporting the whole cycle, from development to sales, with graphical tools for the developers to use network assets, browsing tools for customers to help them sort out which application applies, and sales and revenue sharing support.  (Comment – there was such a contrast in style between this presentation and the first two presenters  that it was difficult to follow, but this work definitely seems to be in a very different mold of the operator controlling the whole process, not the typical internet model.)

 

The rest of this presentation went through the details of how development is done and applications are delivered through a combination of IMS and Web2.0.  This was in my view a very conventional discussion of carriers opening up their networks and exposing assets through APIs. 

 

Patricia Hargil (Alcatel-Lucent)  Application Explosion: What is the right business model?

 

The issue isn’t whether or not to have an app store, the app store is just a mechanism, it’s how the user finds the application.  She was much more interested in what was the business model behind the creation of the application and sale of it.  App stores are the tip of the iceberg.

 

She gave a lot of figures on usage: over $1 trillion in mobile content in 2015, $13 billion in location services in 2013.  Big growth of social media, etc.

 

That’s not the problem though, she showed growth curves that show 3% growth in voice revenues (if you are lucky enough to be in a growing country).  Data revenues grow at about 13%, BUT usage is growing at 131%.  Therefore there’s a disconnect.  (Comment – this is interesting, because for years there was a view that we couldn’t sell enough mobile data to tax the infrastructure that was already there, but in the past 2 years we have moved way beyond that to operators getting a black eye (e.g. AT&T in NYC) over being unable to keep up with the tidal wave of mobile data demand.)

 

The internet defines new business models and “disintermediates” the relationship between the customer and the producer.  (Comment – I recall a decade ago getting that word in a presentation I had to give for someone and not knowing at all what they were talking about.  It now seems a natural concept that the internet eliminates the middleman, and in many cases our industry was the middleman)

 

Network providers have become very aware of application stores and are very much looking at the application store model as a way of getting new revenue. 

 

What do developers look for?  Mostly the number of potential subscribers, then finances, fame, and lots of other things (the paper has the full list).  (Comment – that’s the real challenge for the carrier; independent stores may reach more potential customers)

 

Fragmentation is a big problem – too many devices, too many incompatibilities with carriers, too many different restrictions on usage of capabilities, etc.  (Comment – but remember the view that most developers don’t deal with multiple devices at all!)

 

They concluded there wasn’t necessarily a right business model and looked at >120 different ones.  It came down to 6 categories:

 

  • Network provider led
  • Aggregator led
  • Platform centric (wholesale)
  • Enterprise supplied
  • Trusted partner
  • Internet players

 

(She had a half dozen or so logos in each box indicating who played there).  All can be profitable.

 

She went through some characteristics in a matrix describing who was responsible for what (again, much more is in the paper).  One point was that the network provider led model, while popular, has the highest cost for the carrier because it puts a lot of work on them.  It does give them differentiation, but at high cost.  An aggregator can give many of the same benefits, but by sharing the work across more players the aggregator can reduce costs.  (Comment – it might also be more attractive to developers, who don’t have to work with a lot of small network operators just to reach their networks.)  Many companies really underestimate the cost of supporting a developer community. 

 

She went through a few more characteristics of those two models as examples. 

 

The bottom line was that what the right model is for any given carrier depends on their business goals – increase revenue, save costs, grow brand, etc.

 

Questions for panel

 

Roberto Minerva (Telecom Italia) – not sure he shares the view – app stores from Apple and Android are overlays; they don’t care about connectivity or the network.  They are totally focused on the terminal and will build whatever they need over the top.  To do this they actually create a rather closed world – there’s an Apple or Android way to do anything that is specific to the platform.  (Comment – yes, I think that’s right, they don’t tend to use anything from the operator except at arm’s length)  Operators instead have to spend a lot of effort on infrastructure and interworking.  They spend lots of time on creating APIs for interworking.  Should operators emulate Apple?

 

Rebecca – multiple APIs to deal with.  Even Apple has multiple versions.  The network operator needs to be more focused on APIs for their capabilities and making them available to developers to use in new ways.

 

Session 2A – Content Delivery (Michael Crowther, BT)

Unfortunately the first speaker from this session who was from the Berlin Institute of Technology talking about a research project, took ill and could not present.  The two remaining presentations included one on efficiencies in mobile content delivery and another on the challenge of synchronization in IPTV for social services.

 

Arnab Dey (Wipro Technologies) Efficient operational models for mobile content delivery

Wipro is a software developer in India who has been participating in ICIN for several years. 

 

He gave some statistics of mobile usage in India, showing average growth of over 50% a year for value added services (to him this was things like WAP, SMS, USSD, etc. – ie 2.5G mobile data technologies.)  610 million mobile subscribers as of April 2010.  (Comment – in work done for a different project I got some data on the Indian market for mobile data – It’s large, but affordability is low and as a result much of the market is focused on getting basic services out there, not true broadband.  The situation is expected to evolve, but it will take time.)

 

What are the challenges?  One big one is the variety of devices and the rate of introduction as well as the growth of content – lots of overhead to support everything.

 

There are 6,000 handset types, but only 70 types do 90% of the downloads; the others are mainly just overhead to the content providers.  A typical content provider has to provide 80 different formats for things like “wallpaper”.  (Comment – interesting, I wonder to what extent this is an artifact of the second generation technology – i.e. will things become easier with smartphones, where the phone can do more of the work of customizing content to its capabilities?)

 

The focus of their work was solving this problem – reducing the number of unique combinations to be supported and introducing a way of producing a best fit mapping that made things easier for the content provider.  The result decouples the content providers from the variety of devices.

 

Their framework uses a rules-based mapping which scans incoming content and produces a small number of variations in the content management system, then a second rule engine drives the customization of content to deliver to the phone via WAP.    Their research was to determine the optimum number of variants to store and manage to allow the right information to be generated for the phone when needed. 
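The paper has the details of the rule engines; as a rough sketch of the idea (the device attributes, variant names, and thresholds here are all invented for illustration), the first stage buckets many handset profiles into a few stored variants and the second picks the best fit at delivery time:

```python
# Sketch of the two-stage idea described above: map many device
# profiles onto a small number of stored content variants, then
# pick the best fit for a requesting handset.  Names and
# thresholds below are invented, not from the paper.

# A handful of stored variants instead of ~80 per content item:
# (name, max_width, max_height, format), richest first.
VARIANTS = [
    ("large_jpeg", 480, 640, "jpeg"),
    ("medium_jpeg", 240, 320, "jpeg"),
    ("small_gif", 128, 160, "gif"),
]

def best_fit(screen_w, screen_h, formats):
    """Return the richest stored variant the device can display."""
    for name, w, h, fmt in VARIANTS:
        if w <= screen_w and h <= screen_h and fmt in formats:
            return name
    return VARIANTS[-1][0]   # fall back to the plainest variant

# Two hypothetical handset profiles:
print(best_fit(240, 320, {"jpeg", "gif"}))   # medium_jpeg
print(best_fit(101, 80, {"gif"}))            # small_gif
```

The point of the optimization in the paper is choosing how many such variants to store so the final per-device customization stays cheap.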

 

There are some real problems.  Copyright is an issue since the operator may not be entitled to manipulate the content, which prevents dynamically transforming content to fit a particular device type.  Another problem is that the need for online transcoding might be bursty (i.e. if some popular content gets downloaded broadly, the delivery system may need to reformat it for all the “unpopular” devices to serve the burst) – though this was not the case in their experience.

 

Question (me) – does the problem get worse or better as the market migrates to smarter phones?  Answer – if the phone can do the transcoding and take more on itself, that relieves the carrier of it, but that’s a long time off in a developing market like India; in particular they are still expanding GPRS/WAP capabilities into second and third tier cities where affordability of mobile services is tight and this solution is an excellent fit to deliver content at the lowest cost there.

 

Question – how does the system know what device is being used?  (It’s in one of the fields in a WAP request).
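For reference, the handset identifies itself in request headers – typically User-Agent and, for WAP devices, X-Wap-Profile (a URL pointing at a UAProf capability document).  A minimal sketch of reading these (the sample header values are made up):

```python
# The handset announces itself in request headers; a gateway can
# key its device database off User-Agent or, for WAP handsets,
# the X-Wap-Profile header (a URL to a UAProf capability document).
# The sample values below are made up for illustration.

def identify_device(headers):
    """Return a rough device identifier from request headers."""
    profile = headers.get("X-Wap-Profile")
    if profile:
        # UAProf URL uniquely identifies the device model.
        return ("uaprof", profile.strip('"'))
    agent = headers.get("User-Agent", "unknown")
    # The first token is usually vendor/model, e.g. "NokiaN70/2.0 ...".
    return ("user-agent", agent.split("/")[0])

request = {
    "User-Agent": "NokiaN70/2.0 (3.0546.2.3)",
    "X-Wap-Profile": '"http://example.com/uaprof/NN70-1.xml"',
}
print(identify_device(request))
```

A real gateway would look the returned key up in a device capability database rather than trusting the string directly.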

 

Social TV needs Group Sync (Hans Stokking, TNO)

Or “A Network-Based Synchronization Approach Standardized by ETSI TISPAN”

 

For now, TV is an individual experience, but many find it unsatisfactory.  Social TV is a way of sharing the viewing experience with friends in other locations. (Comment – I’ve seen presentations on this concept for years, even done work on this myself, but it’s not here yet.)  He made the same point – it’s not here yet, though the capability exists to use Skype for communication while watching TV or share watching a YouTube video.

 

Social TV introduces the need for group synchronization – the problem with group watching is that different viewers may see the same thing at different times due to differential delays in the network.  Interestingly enough he showed a graph of delay versus delivery technique for real time video.  IPTV or satellite generally deliver content before broadcast TV, while every other medium is behind broadcast.  (Comment – Yes, this is a real problem, but the largest differential delays shown in his chart are on the order of 2-3  seconds, much shorter than it takes the user to communicate with friends.  On the other hand I wonder about the situation where at least one participant  doesn’t have the bandwidth for continuous video and continuously falls behind the others.  That’s where I think there is a real problem but I’m not sure how to solve it.)

 

He talked about various approaches to group synchronization: 

 

  • The Maestro scheme basically uses a central synchronization server to communicate with all the clients and determine how to delay the stream at the “fast” terminals to keep in synchronization with the slower ones.
  • The Peer-to-peer approach relies on communication among endpoints to identify who is ahead and who is behind and introduce delay to synchronize all.  This can require lots of work in the end user devices and is likely to be less precise.
  • A third, network-based approach uses intelligence in the access link to communicate with a central synchronization server and introduce the delay there.  This has the advantage that it doesn’t require programming the end points and can easily be bypassed when changing channel (which is a problem with some other approaches).  It also does not account for delay between the endpoint and the access device.  (Comment – this is of course where the problem is likely to be if one participant has a slow end link or computer) 
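A minimal sketch of the central-server idea behind the first (Maestro-style) scheme: clients report the presentation timestamp they are currently showing, and the server answers with the extra buffering delay each “fast” client should add to match the slowest.  The report format here is invented:

```python
# Sketch of Maestro-style central synchronization: each client
# reports the presentation timestamp it is currently showing, and
# the server answers with the extra delay each "fast" client must
# buffer so all clients match the most-delayed one.
# The report format is invented for this sketch.

def compute_delays(reports):
    """reports: {client_id: current_playout_timestamp_seconds}.
    Returns {client_id: extra_delay_seconds} that aligns everyone
    with the client furthest behind."""
    slowest = min(reports.values())
    return {cid: ts - slowest for cid, ts in reports.items()}

# Three viewers of the same broadcast: IPTV ahead of DVB-T,
# which is ahead of a web stream (figures invented).
reports = {"iptv": 102.0, "dvb-t": 100.5, "web": 99.0}
print(compute_delays(reports))
# {'iptv': 3.0, 'dvb-t': 1.5, 'web': 0.0}
```

The peer-to-peer variant distributes this same computation across the endpoints; the network variant applies the delay in the access link instead of the terminal.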

 

All 3 are specified by ETSI-TISPAN IPTV specs. 

He showed a reference diagram with protocols in SIP to do all the needed work.  He gave ETSI standard numbers, reference points and information on how all this can be done. 

 

Question (Kris) – is this a real problem?  Consider all the effort spent on Voice Call Continuity to hand over calls from WiFi to mobile – the real need for this turned out to be quite rare in comparison to all the effort spent.  (Comment – yes, when I was with Personeta, they claimed, correctly as it turned out, that the need for this was quite small in comparison to the need for having the same service on multiple devices and the ability for customers to hand calls between devices explicitly.)  Answer – he’s not sure how much the real need is.  Nobody knows whether anyone has implemented the standard.

 

Question – does this solution work when end users use different codecs?  Answer – yes, this is probably the situation where it is needed, because one user may be much slower because of transcoding needs.

 

Question (Deutsche Telekom person) – there is a process to synchronize the decoding/presentation to a global clock; what does this do in addition?  Answer – he didn’t know what process the question referred to.  (Comment – I think from the paper they considered the ability to synchronize without a global clock a requirement.)

 

Question (Chet McQuaide) for first speaker – how do you define “best fit?”  (In an aside he mentioned that some content providers (e.g. Disney) have standards for what’s the minimum acceptable way to deliver their content and “best fit” might not meet that).

The answer was that they defined it ad-hoc based on user feedback.  It turns out users aren’t fussy – the main issue is can the user get the content downloaded and is the image good enough in the eyes of the user.

 

Question (Michael Crowther) – is the real problem for synchronization with users connected over different technologies?  (Answer – yes, this is a bigger problem.)

 

Question (me) – what if one of the users can’t keep up in real time – what do you do?  Answer – we have to support this, but it’s not clear how. 

 

Question (NSN person)   Did you consider how to structure content storage (i.e. where to put it).  Answer – to this point the focus has been on the SDP, which is up in the service layer and centralized.  There are efforts to address the distribution of content delivery functions in an optimal way, not covered by this project.

 

Question (Anders Lundquist) – we seem to be going away from real time except for sports and towards video on demand.  How does that trend affect this service?  Answer – this kind of thing is happening ad hoc today without benefit of standards.  Anders was looking for an answer to what happens when 2 people decide to watch something on demand and are out of sync.  The answer was that the synchronization they are looking at was on the order of 10 seconds; beyond that it’s more like what happens when someone comes in in the middle of a movie playback, and the two decide together whether to restart the movie to watch together or just have the newcomer join in progress.

 

Another comment – this is a good first step, there are clearly lots of additional needs for synchronization.

 

Another comment from a FOKUS person – the synchronization issues are much tougher in gaming.  Would this solution apply there, or is there something from gaming that should be applied to TV?  Answer – gaming is a very different situation; it won’t tolerate gaps and is very real time oriented.  Good gamers adapt their behavior to delays in the network (e.g. firing ahead of getting in range).  You don’t want the system to introduce variable delay that will frustrate that.

 

Special Presentation: Huawei

 

As the sponsor of Lunch on Monday Huawei made a presentation focused on their research and development centers in Europe.  In 2009 they had $30 Billion in revenue (Contract Revenue, if that’s different from total, I’m not sure).  They have been growing at 30% a year.  87,500 employees in 2009.  (Comment – this is about the size of Lucent in the late 1990s, probably comparable to the other major network vendors at that time, most of which have experienced major declines since then)

 

Huawei has 40,000 employees in “R&D”, with 10% of revenue funding R&D.  They have 22 R&D centers around the world, 4 in the US, and 4 in Europe (though in the details that follow they apparently have more than that).  72% of revenue is outside of China, and that fraction is growing.  The speaker overviewed 4 R&D centers in central Europe (Munich, Berlin, Brussels, and Milan).  The focus is standards, “high research”, and applied research.  No development is performed in Europe.

Session 3A Rebecca Copeland (Independent/Huawei).

Implementation and Evaluation of a Collaborative Content Play Method in a Home Network (Kazuyuki Tasaka, KDDI R&D Labs)

 

KDDI is a major communication company in Japan.  A use case scenario: playing music to suit the mood while displaying photos on the TV.  In the scenario the user picked photos via the mobile phone, then played music and displayed the photos on the TV.  All the devices need to be coordinated by an application to achieve this result.

 

He talked a bit about the communication need for the phone to talk to the content server in the home and pick the right things to play and then control.  He then went into an architecture to address it and several functions that were needed:

  • Association of main and sub content.  (e.g. making up a playlist), which might be multi-media and organized in various ways (e.g. by theme, by date, etc.)
  • Automatic Collaborative content playing (set up playing on multiple devices automatically).
  • Automatic change of connection (i.e. move playing to another device at some point).

 

They did an experiment involving the need to start by establishing start times for playing and then change devices along the way.  All this seemed to be done with HTTP as a control.  It took about 1 second to set up the play time and 2.4 seconds to change devices.  (Comment – because of some language troubles and a lot of detail in the slides I’m not positive, but this seems very slow to me.)

 

His conclusion was that the times were acceptable and you could set up to do this.  (Comment – this seemed like a lot of effort for a rather odd service.  Maybe this is more natural in Japan but doesn’t translate well, or maybe they are just ahead of the rest of us in how they use communication technology and devices.)

Paolo De Lutiis (Telecom Italia), Managing Home Network Security Challenges

 

Home networks supply a lot of social needs for people (games, home automation, security, and communication).  A key aspect of home networking is connection to the global NGN/Internet.  It’s a door to the home.

 

ETSI TISPAN considers a home network to be a Customer Premises Network and a part of the NGN from a standards perspective.  It has a gateway (CNG) and multiple devices (CNDs) behind it.

 

Why is security important and who cares?  It’s important to the customer because it protects their privacy, prevents fraud and especially protects children from unwholesome influences.  For the provider it protects the NGN from abuse and fraud, and protects their ability to meet regulatory requirements.

 

Home networking is complex – many devices, many protocols, many services, regulation, network address translation, etc.  They used a threat/vulnerability analysis to look at the requirements and potential solutions.

 

The main threats include:

  • Denial of service – it’s a dangerous threat because the connection is “always on” which both increases the window for exposure to attack and makes the home devices an attractive target because if compromised they can launch an attack easily.
  • Social Threats (SPAM, Phishing, Botnets to spread malware, SPIT, etc.)
  • Eavesdropping (interception, unprotected WiFi, or poor cryptography, especially in use by fixed devices.)
  • Masquerade:  someone pretends to be another element to access a service or a device.

 

The countermeasures to these threats were enumerated by TISPAN, with the focus on putting the burden of security on the Customer Network Gateway.  This isn’t perfect because it creates problems for some services like remote administration.

  • Firewalling
  • Network Access control
  • Preventing unsolicited communication (i.e. blocking incoming messages, though that requires some management by the user to exempt things they want to hear).
  • NAT – IPv4 is still in place and essential and requires NAT, so a standard is needed to traverse address mapping.
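
As a rough illustration of the unsolicited-communication countermeasure above, a gateway-side filter might combine a user-managed exemption list with a check for solicited traffic; the message fields and names here are invented for the sketch:

```python
def make_gateway_filter(allowlist):
    """Toy sketch of blocking unsolicited incoming communication at the
    Customer Network Gateway, with a user-managed exemption list."""
    allowed = set(allowlist)

    def admit(message):
        sender = message["from"]
        # Treat anything that answers an outgoing request as solicited.
        solicited = message.get("in_reply_to") is not None
        return solicited or sender in allowed

    return admit

admit = make_gateway_filter(["alice@example.net"])
print(admit({"from": "alice@example.net"}))                      # exempted sender
print(admit({"from": "spammer@example.org"}))                    # blocked
print(admit({"from": "shop@example.org", "in_reply_to": "q1"}))  # solicited reply
```

The exemption list is exactly the user-management burden the talk mentioned: someone has to maintain it.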

 

The conclusion here was that there are many issues and the network operator can help customers protect their home network.  (Comment – I think the real problem is this stuff is very complex for the customer who is looking for simplicity.  I don’t honestly know how most non-technical people manage to set up home networks given the amount of effort required.  Most ready made solutions assume you start from scratch and have a single vendor, but by now most real users have a patchwork of solutions and have to get them to work together.)

 

Mohamed Mahdi, New UPnP Service for Multimedia Remote Sharing with IMS Framework

 

The context for this work is point to point services that need to reach home devices behind the home gateway.  Prior work uses web based solutions (a big security problem, and each service has a unique architecture and services).  Solving the problem at the home gateway has been difficult.

 

Their proposed solution allows a service to share multimedia content between terminals connected to different home networks.  It’s transparent to a commercial terminal, and it’s secure and preserves QoS.

 

Their solution adds a remote access server in the network IMS that manages access rights between home networks (i.e. defines who can access which network).  In the home gateway there is a remote agent (a SIP UA, accessible from outside, that can respond to requests) and the Virtual Media Server, which stands in for media servers behind the gateway and delivers the requested information outside.  The solution follows basic procedures to establish a kind of Virtual Private Network between the user’s location and the remote home being accessed, with the remote agent and VMS completing the connection between the locations and acting as intermediaries to relay information.
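
A minimal sketch of the access-rights role of the remote access server might look like the following; the grant/authorize vocabulary and the service names are assumptions for illustration, not taken from the paper:

```python
class RemoteAccessServer:
    """Sketch of the network-side element deciding which visiting user
    may reach which services on which home network."""

    def __init__(self):
        self.rights = {}  # (user, home_network) -> set of allowed services

    def grant(self, user, home, services):
        self.rights.setdefault((user, home), set()).update(services)

    def authorize(self, user, home, service):
        # The remote agent on the target gateway would only complete the
        # session if this check passes.
        return service in self.rights.get((user, home), set())

ras = RemoteAccessServer()
ras.grant("bob", "home-of-alice", {"media-browse", "media-play"})
print(ras.authorize("bob", "home-of-alice", "media-play"))       # True
print(ras.authorize("bob", "home-of-alice", "home-automation"))  # False
```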

 

He went through a long scenario of browsing a media catalog in another home and playing of selected media.  They prototyped this using OpenIMS as the core network, and building the two new pieces (RA and VMS) to demonstrate feasibility.

 

General Questions:

Will the UPnP service work anywhere or must they use your gateway?  Answer – it is workable anywhere, but you have to have the components on the gateways you want to use.

 

Another audience member asked Mr. De Lutiis about another ETSI project on home security and how it related.  The bottom line is that there is no perfect solution, what they present is a framework, but it is up to the network operator to select the best framework for their particular needs and network.

 

Another question, basically to the whole panel: how does the telecommunication operator get revenue for these activities?  (I.e. this is nice work, but will users pay for extra security, UPnP, network control, etc.?)  Rebecca’s answer was that anything they do which increases usage has the potential to generate revenue.  Mr. Mahdi’s response was that his service allows the operator to deliver a premium service which is more valuable to the end user.  The architecture potentially applies to both home and commercial applications, including home automation and VPN “work at home” situations.  The response from Paolo De Lutiis was that security is a key need, and while end users might not pay for it now, in the long run he expects they will because they will make more and more use of the network, including some valuable financial things.  (Comment – personally I am a luddite when it comes to on-line finance, because I KNOW how insecure the underlying machinery is, and as long as I can avoid opening on-line access to financial accounts I will do so to avoid compromise, but that’s not the way everyone operates.  Specifically, one of the interesting comments at lunch was that eBanking is very popular in Pakistan, even though most people don’t have their own mobile device for access and have to borrow someone else’s.  For now apparently social pressure prevents obvious abuse.)

 

Rebecca tried to get the KDDI speaker to address the question of whether users would pay for coordinated delivery of the sort he overviewed.  I don’t think he knew.  He said this would be part of the services provided by the operator (maybe what he means is that it is used to differentiate the operator’s offering).

 

Hans Stokking made the comment from the audience that all these things help differentiate the offerings of the operator from competitors and help stimulate more usage.  Someone suggested that “viral marketing” might popularize these services, and that all of this might become a marketing plus.  (Comment – true enough, but I think this really is rather weak.  Ultimately I think you need to decide if people will really pay for something.  If they won’t, then it doesn’t matter much whether you provide it or they download it free off the internet.)

 

An Orange speaker pointed out that they already have people willing to pay an extra charge to get a security suite through the operator, this isn’t something new, there is demonstrated willingness to pay.  (Comment – I think this whole area is potentially ripe for support from service providers because I suspect the largest complaint from users about buying security software is managing the constant need to update it and an awkward renewal process with the vendor.  Collaboration with a network operator might result in a solution that is a bit more user friendly both in how it’s updated and paid for.)

 

Rebecca commented that the coordinated media service would really be a natural candidate for use in marketing.

 

There was some discussion on other uses, like using coordination to improve remote playback of events for parents, or media sharing in conjunction with social networking.

Session 4B – Social Networking (Kristofer Kimbler)

Kris now represents his own social networking company, Azouk networks.  The key question he posed for the session was: How should service providers relate to social networks?  Are they a threat?  Can they be turned to benefit the operator?

Enhancing Social Communication Between Groups (Tim Stevens, BT)

The case for rich media – used to be that people would watch TV or see movies independently and then converse about them over the fence or around the water cooler.  Lots of technology now supports communication, but it hasn’t so far replaced the informal socialization. 

 

He talked about a project (TA2 or “tattoo”) they were involved in that had a lot of devices and services to allow people to interact with each other that they have been using to investigate scenarios for services.

 

What are the requirements?  At the low level:

  • Low delay encoding and transmission
  • Video and audio rendering
  • Capture and Reproduction
  • Data Analysis.

 

Next up is service management and multimedia composition (managing who is in, who is out, and what media are associated with various flows), then on top, orchestration.

 

They have four rooms in Antwerp and Ipswich to use in experiments.  Each is wired with 4 microphones for surround sound capture and has one HD camera whose output can be edited to create multiple views, plus two low-resolution side cameras.  There are also 42 inch displays, touch pads for games, etc., all connected via Gigabit Ethernet.

 

He presented a lot more detail about how the whole test setup worked.  (Comment – I’ve seen lots of setups like this, from the MIT Media Lab to conferencing testbeds in Bell Labs.  Unfortunately most of the stuff built in the ones I’ve seen has been technically interesting but has not led to cost effective services.)

 

He showed a short video clip demoing their system.  One interesting thing was that they used audio localization of the speaker to pick the view being transmitted.  The narrator claimed this was the first such system to do so; however, I know the crude video conferencing system used within the old Bell System in the early 1980s had a similar function: a set of microphones at the conference table, with whichever one picked up the strongest signal determining which camera view got transmitted.  (Again, an interesting capability but unclear what real value it provided.  The one I experienced was error prone in that the microphones picked up every noise in the room, including ones you didn’t want anyone to notice, and often showed the wrong view.)

 

He covered a lot of work in progress, including building a more lightweight implementation, reducing video delay, going to multi-way communication, and figuring out how much bandwidth they really need, since up until now they have simply used 40 megabits.

 

Question (Dave Ludlam) – when will this become practical?  Answer – no idea, the focus of the research is sociology, not technology.

Word of Mouth Mobile Marketing Real World Recommendations (Mathias Will, T Systems)

This work is part of T-Systems mobile internet Competence center, which has done work on business and technology consulting, mobile applications, service platforms, and operations and support.  The focus of this work was addressing how retailers can get the word out on their offerings in an environment which is squeezing out budgets for traditional advertising.

 

Some of the opportunities here are to take advantage of the capabilities of smartphones (GPS, positioning, lots of processing, good display).  They are  not just trying to build another application.  He gave an example of car companies offering a buying app – it’s a one time deal, the user might download it while car shopping but it’s not useful other times. 

 

One observation – marketing messages mean more when delivered by trusted friends or family, so the real goal is to figure out how to get users to share them with their friends.

 

Some things that work:  Loyalty cards, mobile coupons, search and navigation, entertainment, and things that build community.

 

He saw a lot of focus on things that helped companies retain customers – the internet has made it way too easy for customers to share their experiences and as a result failing to satisfy one customer may result in loss of many sales.

 

Another opportunity is scanning bar codes with a mobile device to get more information on the product.  (Comment – yes, these are becoming much more common, as are bar codes in publications and ads.  It’s funny that over a decade ago a device to do this, the CueCat, was distributed to millions of people free, but nobody could be motivated to use it.)

 

He talked about using social networking to share ads and coupons with friends, which led to better delivery.

 

He presented some information on a demonstration he called the Bargain Hunter Application.  This was a bit odd.  The user could look for coupons, but before getting the coupon the user had to solve a small riddle to prove interest.  It allowed users to share coupons via social networking.  (Comment – I don’t understand the riddle part – I would think that any barrier to getting the coupon would be viewed as an obstacle and cause users to give up.  Maybe most people aren’t quite as hassle averse, but I personally have a long list of companies I will never do business with because I don’t like the way they advertise or because they are simply difficult to deal with.  That’s a hard stance, but in those cases there are always way too many choices, and avoiding ones that cause me hassle is one way of coping with that problem.)

 

Question (Kris) – is there really a space for viral marketing from mobile operators?  Answer – maybe not, Facebook does this already.  Maybe the better way to do it is to use Facebook or Twitter as a channel.

 

Kris asked whether attendees knew whether their own companies had a presence on Twitter – few knew.  Kris’s claim was that companies were not using social media effectively to communicate.  It was being used with consumers, but not others.

Pervasive ICT Social Aware Services Enablers (Claudio Venezia, Telecom Italia)

The Web is “going social”, which contrasts with the walled garden approach used by mobile operators in the past.  Platforms are open and there is federation among social networks.  (Comment – really?  I wasn’t aware that Twitter and Facebook were anxious to share.  Each has APIs and you can build to them, but each maintains control, as do all the others.)

 

What is context – attributes of an individual that define situation:

  • Location
  • Time
  • Place (including the semantic implications attached to place, like “being at home”).

 

Context is supplied by applications and sensors.  Smart Objects use contextual information to decide how to customize services (example – a digital picture frame that decides what to show based on who is near).

 

Their context aware platform provides a point for integrating context information from multiple sources and brokering context between devices and applications.  The platform has a context cache to remember short term information that might be useful, and a history database that maintains old context data for use in trying to learn user behavior and predict aspects of the user’s context from limited sensor information (e.g. when the user leaves the office in the afternoon he or she will probably next be “at home”).
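
The history-based prediction idea (leaving the office in the afternoon, therefore probably home next) could be approximated with something as simple as first-order transition counts; the real platform presumably uses richer features than this sketch:

```python
from collections import Counter, defaultdict

class ContextHistory:
    """Minimal sketch of the history database idea: record observed
    context transitions, predict the most likely next context."""

    def __init__(self):
        self.transitions = defaultdict(Counter)

    def record(self, current, nxt):
        self.transitions[current][nxt] += 1

    def predict_next(self, current):
        following = self.transitions.get(current)
        if not following:
            return None  # no history for this context yet
        return following.most_common(1)[0][0]

h = ContextHistory()
for _ in range(5):
    h.record("office-afternoon", "home")
h.record("office-afternoon", "gym")
print(h.predict_next("office-afternoon"))  # home
```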

 

He went through the interfaces to the broker and its basic functions.  A big issue is how to represent context.  He proposed “ContextML”, an XML-based language to represent elements of context.  Context data is segmented into scopes, associated with who provides it.  They didn’t try to build any comprehensive ontology of context information.
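
Without the paper’s actual schema to hand, a ContextML-style fragment carrying a scope and its provider might be generated along these lines; the element and attribute names are guesses for illustration:

```python
import xml.etree.ElementTree as ET

def context_fragment(provider, scope, params):
    """Build an illustrative ContextML-like element: context parameters
    grouped into a scope and tagged with the providing source."""
    ctx = ET.Element("ctxEl", {"scope": scope, "provider": provider})
    data = ET.SubElement(ctx, "dataPart")
    for name, value in params.items():
        p = ET.SubElement(data, "par", {"n": name})
        p.text = str(value)
    return ET.tostring(ctx, encoding="unicode")

xml = context_fragment("gps-sensor", "position", {"lat": 45.07, "lon": 7.69})
print(xml)
```

Segmenting by scope lets the broker route and cache each kind of context independently of the others.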

 

Claudio went quickly through some of the W3C and other standardization efforts related to this area. 

 

Question (me) – could you use this technology to hold ICIN virtually?  Kris felt no; Mathias echoed that – technology won’t eliminate the need to meet in person.  Tim Stevens pointed to the poor quality of audio conferencing and how it improves after you meet face to face.  Claudio felt that you won’t replace the social interactions.  (Comment – in fact I wrote a paper 10 years ago proposing a way to hold a virtual conference by using technology to assist users in having the kinds of interactions they get from attending a conference in person (see http://home.comcast.net/~wamontgomery/communications/conference.doc).  It’s not perfect, but it would be far better than the one-to-many structure so commonly used for “Webinars” and teleconference meetings.  With today’s technology you could do better.  I do think this is a service that would be used, but I suspect there’s little motivation for it because users like travelling to attend a conference in person and conference organizers know they won’t get a significant fee from someone attending an internet-mediated conference.)

 

Dave Ludlam came back to the keynote given by video, noting that with a bit more technical help that connection could have been two-way, allowing the head of standards to participate in the discussion on why next generation internet efforts in different countries are not being coordinated.

 

Another comment (not sure who) – the average age in the room is at least 45.  None of us grew up with this technology.  His children have international friends that they consider close that they have never met face to face.  The point here is that the use of technology to communicate socially may be a better match to the next generation.

 

Tim Stevens talked about their experience in testing children in their interactive game environment.  Children get the communication much more naturally.

Session 5A – X as a Service (Roberto Minerva)

This session was devoted to cloud computing in various forms – offering infrastructure or services virtually over a network.

The Network Aspect of Infrastructure as a Service (Karsten Oberle, Alcatel-Lucent)

This work I believe is part of a multi-country European research project.

 

He started talking about the importance of performance in determining how acceptable services are to the user, with some examples of Google and Amazon.  He cited a study showing that a slight deviation of performance in Google search would drive customers to alternative providers.  He had a histogram of Amazon response performance showing mostly acceptable performance with a few delay spikes caused by failures of machines.  Looking from Europe and using servers in the US, he showed there is 50ms of variability in response from the HTTP servers, and 100-300ms of variability in how fast the individual servers process the request.

 

He showed a set of curves showing performance and cost as resources were added in a network environment demonstrating there’s a “bathtub” shaped optimum for cost, where it performs well enough to keep customers while keeping cost under control.

 

He went through various models for cloud computing.  There are a lot of APIs being published and a lot of models for how the application requesting service specifies its needs and asks to scale performance.  He showed an exponential growth in APIs with several possible futures:

  • A single API wins quickly and sets an industry standard
  • International standards act a bit slower to set a standard.
  • No single standard is agreed on and the number of APIs grows larger for a long time, limiting application portability.

 

He went through a scheme to describe the needs and annotate the application description with Quality of Service, then how it would actually be mapped into an assignment to particular resources in the network which would meet the requirements.  The network mapping included the possibility of using private facilities in addition to the public internet.
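
One plausible reading of that mapping step is a greedy placement: for each application component, choose the cheapest resource that satisfies its QoS annotation.  The metric and field names below are placeholders, not the project’s actual annotation scheme:

```python
def assign_resources(requirements, candidates):
    """Greedy sketch: map each QoS-annotated component onto the cheapest
    candidate resource meeting its latency and capacity needs."""
    assignment = {}
    for comp, need in requirements.items():
        feasible = [c for c in candidates
                    if c["latency_ms"] <= need["max_latency_ms"]
                    and c["capacity"] >= need["capacity"]]
        if not feasible:
            return None  # no placement satisfies the QoS annotation
        assignment[comp] = min(feasible, key=lambda c: c["cost"])["name"]
    return assignment

candidates = [
    {"name": "public-dc",    "latency_ms": 120, "capacity": 100, "cost": 1},
    {"name": "private-edge", "latency_ms": 20,  "capacity": 10,  "cost": 5},
]
result = assign_resources(
    {"frontend": {"max_latency_ms": 50,  "capacity": 5},
     "batch":    {"max_latency_ms": 500, "capacity": 50}},
    candidates)
print(result)
```

The latency-sensitive component lands on the (pricier) private edge facility while the bulk work goes to the public data center, mirroring the public/private mix the talk described.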

 

He went through the process of distributed resource management that controls the infrastructure to meet the quality of service needs.

 

(Comment – it might have been the short night last night, but like many of the other cloud computing presentations I have seen I find this a bit too abstracted to get a firm grasp of how I might actually use this if I had a large application to deploy.)

 

Platform as a Service for Business Customers (Philippe Offermann, Deutsche Telekom)

 

He presented a reference architecture for platform as a service – the ability to deliver a virtual machine from the cloud as a host to an application  (Comment – again, personally I find this very tough going trying to translate the abstract descriptions into what concrete pieces of an application they represent.)

 

He went through a lot of information on structuring the application, then gave as an example a system that would register children for school.  This application has been rolled out by DT to a few municipalities.  The real benefit to the cities (or other end users) is the ability to rent everything on an as-needed basis.  For this to work, the application and processes must be similar enough to others that “renting” the infrastructure makes economic sense and doesn’t require unique capabilities that will not be used by others.

 

Designing an application this way can require substantial effort because of the need to thoroughly understand the application.  He noted the potential to re-use a lot of this for similar processes (e.g. college registration vs. kindergarten registration).

 

(Comment, as the spouse of an academic administrator working at an institution that recently adopted a “standard” platform for student administration I have heard about many problems with this model.  The trouble is that while the overall processes of managing student life are similar, there are enough differences in the specific policies of any individual institution (e.g. when can an instructor change a grade) that it is very difficult to design a generic solution flexible enough to be adaptable to all the specific variations, and failure to adapt creates major challenges for all involved.)

 

His conclusion was that offering the platform for applications like this is a major business opportunity for carriers who have the expertise in networks and in managing large systems.

 

Question:  Why do you think this is an opportunity for carriers specifically?  (His question I think was more “what makes you think that DT can understand the business of registering children for kindergarten”.)  Answer – because reliability and performance are key needs that aren’t met by the public internet, and DT has the resources to provide them and the expertise to do it right.  Follow-ups asked whether there was some other way of doing this such that there would be strong competition for it.

 

In the process of addressing the question there was a lot of discussion of targeting a particular size of business – large enough to have complex needs but not large enough to manage the whole job of implementing it themselves.  (Comment – about this time alarm bells go off in my head.  Roughly 30 years ago the old AT&T was trying to find a way in its unregulated business unit to enter the computing and data communication business and put together what was essentially a “cloud computing” offering – applications that could be rented by small businesses with big needs and without the ability to do it themselves.  There were a lot of problems with the way the AT&T effort was managed, and in the end it failed.  I was told in a post-mortem review that one of the root causes of the failure was that to have an infrastructure powerful enough to meet a wide variety of needs it was expensive, with the result that the fees for the users were substantial, and this essentially squeezed out the market – any organization big enough to pay for the “cloud” solution was big enough to buy dedicated resources and manage it  themselves more cost effectively.  We may have learned enough in 30 years not to make all the same mistakes, but I am nervous that so many people discussing “X as a service” offerings have only a fuzzy notion of who would be a good customer for that service rather than a more precise notion of what the market is that would give good boundaries on acceptable cost and performance to cause the market to actually buy it.)

Clostera: Cloud Storage Enabler for Rich Media Applications, Jerome Bidet (Orange Labs R&D, France)

This work has been done jointly with other companies.  Orange does R&D all over the world.  Orange has a storage service with about 600K users around the world and manages over 35 million files.  He went through characteristics of user storage devices and on-line storage (user storage is fast and cheap but not reliable, while on-line storage is reliable and accessible from everywhere but expensive and subject to quotas).  What they do is a federation that provides a single interface to both local and network storage through the same APIs, transparently to the user.
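
The federation idea (one API, a local copy for speed, a network copy for reliability) can be sketched in a few lines; the backends here are plain dicts standing in for real device and data-center stores:

```python
class FederatedStore:
    """Sketch of a single put/get interface that writes to fast local
    storage and replicates to reliable network storage behind the scenes."""

    def __init__(self, local, remote):
        self.local, self.remote = local, remote

    def put(self, key, blob):
        self.local[key] = blob   # fast, cheap, not reliable
        self.remote[key] = blob  # reliable, quota-limited replica

    def get(self, key):
        # Prefer the local copy; fall back to the network replica.
        if key in self.local:
            return self.local[key]
        return self.remote[key]

local, remote = {}, {}
store = FederatedStore(local, remote)
store.put("photo.jpg", b"...")
del local["photo.jpg"]                   # simulate losing the device copy
print(store.get("photo.jpg") == b"...")  # recovered from network replica
```

The adaptive replication and point-of-presence tier described below would slot in between these two extremes.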

 

Their architecture has 3 levels:

  • Local user devices
  • “Point of Presence” level storage which are close to the end-user (e.g. in the equivalent of a DSLAM)
  • Data center storage, which can be everywhere.

 

The components of the solution include an access point where the user accesses the system (one point for all, with the API exposed).  The storage provider manages the actual storage.  There are also managers for content, system nodes, and users; these last three functions are in the data centers.

 

How does Clostera compare to other solutions?

  • It includes user and edge devices, unlike SAN and other centralized systems
  • It provides ease of management (unlike P2P systems).
  • It provides automatic and adaptive allocation and replication

 

This is a prototype that now operates only on a friendly user basis.  Some of the extensions proposed include the ability to traverse NATs, handling sharing, and being able to support access while offline.  (Comment – the current prototype is clearly limited, I’m not sure how to reconcile this with some of the earlier statements about the scope.)

 

During the question period interoperability was a common question and while the speaker talked about being able to interwork with other network storage architectures, one thing that became clear was that as we design “cloud” based services we run the risk of winding up with differing solutions that aren’t easily mapped, and creating a fragmented market, much as what happened in the early days of IN (e.g. solutions for Lucent IN couldn’t immediately be ported to Telcordia, Nortel, Alcatel, or other vendor equipment without substantial effort.)

Dhatri: a Pervasive Cloud Initiative for Primary Healthcare Services Subrahmanya Venkata Radha Krishna G Rao (Cognizant Technology Solutions, India)

The presentation covered the cloud as an implementation as well as the specifics of an application supporting healthcare in rural India.  The healthcare application was aimed at providing care to those outside of major metropolitan areas who were underserved, and even to others.  (The idea seems to be to offer healthcare as a rentable service – use communication to eliminate a lot of wasted time so that the user pays only for the small amount of time for the doctor to view the results of tests and examinations performed remotely, presumably allowing the doctor to do more and charge less per transaction.)

 

The system used simple mobile technologies (SMS, GPRS) to collect data from patients, manage it in a cloud based computing infrastructure, and deliver it to doctors (via mobile phone), then take the result from the doctor.  It’s based on the Amazon cloud offering (Linux based).

 

He showed a promotional video on the end service, in which a woman recorded symptoms and was put in contact with a doctor.  (Comment – it’s a noble goal, but I wonder whether the technology here is too lightweight to give the doctor enough information for a good diagnosis.)

 

Question:  Is this really a cloud application or just a network application?  Answer – not in these examples, but he can envision cases where analyzing the test results requires massive computing and as a result benefits from the availability of on demand resources via the cloud.

 

Session 6A Content Services (Osamu Mizuno chair)

Targeted Advertising for the Communication Service Provider Using the Intelligence in the Environment Arnab Dey (Wipro Technologies, India)

There is a lot of controversy over targeted advertising (i.e. is it invasive of privacy, is it effective, etc.)  Nevertheless it is expected to grow over the next few years.

 

From the perspective of the customer viewing advertising, there is concern about SPAM, the lack of the ability to opt out, and privacy.  From the network provider perspective it might be a waste of network resources.  Targeted advertising has the potential to improve the situation because it increases not only the effectiveness of the advertising but also its acceptability to the customer, who sees advertising and promotions for things he or she actually wants.

 

They did a survey of customers on how they felt about some aspects of targeted mobile advertising and found that there was very high resistance to advertising over the mobile device, but with incentives, like coupons and a way of turning it off (opt out), it would be acceptable to about half the audience.  They asked about channels people would accept ads on, and SMS and email were far more acceptable than IVR or voice messages.

 

One of the things they found was that users were much more receptive to getting ads on a “pull” basis – where they might get notified that some offer was available but it would only be delivered when they ask for it.  3 trigger models:

 

  • Internet search – observe what the users search for to decide what ads they might be interested in.
  • Content Delivery – piggyback advertisements on subscription based content services (e.g. joke of the day)
  • Network Events – deliver an ad when some event happens (like when you call a business and it’s busy). 
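
A toy dispatcher for these trigger models, respecting the “pull” preference by notifying rather than pushing, might look like this (all field names are invented for the sketch):

```python
def choose_ad(event, profile, inventory):
    """Match an incoming trigger event (search, content delivery, or
    network event) against an ad inventory, filtered by user interests.
    Pull model: notify that an offer exists; deliver only on request."""
    matches = [ad for ad in inventory
               if ad["trigger"] == event["type"]
               and ad["topic"] in profile["interests"]]
    if not matches:
        return None
    return {"notify": matches[0]["id"], "deliver_on_request": True}

inventory = [{"id": "coupon-42", "trigger": "search", "topic": "travel"}]
offer = choose_ad({"type": "search", "query": "flights"},
                  {"interests": {"travel"}}, inventory)
print(offer)
```

The interest filter is where the opt-out and incentive findings from the survey would have to be enforced.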

 

(Comment – the user acceptance still seems critical and very fragile.  I can imagine a lot of potential abuse, like giving you an ad for a competitor when you call a company and don’t get through)

 

He showed an architecture which included deep packet inspection to identify what the user was doing (i.e. gather information from searches) and take other events from the network, feeding a recommendation engine which would produce a recommendation for an ad and a delivery channel for the user.  That could be fed back into the packet processing layer to allow ads to be inserted (piggybacked) on delivered information.

 

Targeted content based advertising is a new field, not as mature as on-line advertising, but it has the potential for much better targeting because the provider knows much more about the customer than the typical on-line advertiser does.

 

Question (Chet McQuaide) – Amazon does a lot of targeted advertising now, how does what you are talking about compare with what they do?  Answer – the speaker sees new opportunities related to content services. 

 

Question – what kinds of channels does this work with, and is the ad really integrated with the content?  Answer – it could use multiple channels (TV, mobile, broadband, SMS, etc.).  The ad could be delivered with the content (on the sides of a web search, after an SMS, etc.) or could be sent independently.  Ads that are logged for delivery rather than pushed to the user would be delivered independently on request.

 

Interactive Visual Content Sharing and Telestration: A Novel Network Multimedia Service Qiru Zhou (Alcatel-Lucent, USA)

Telestration is the annotation done by commentators for sporting events to overlay video and highlight things to draw attention to particular pieces of the image.  (It is also done for weather, news, and other material.)  Today it uses dedicated equipment for drawing and video mixing that must be physically close to the user, and as a result it does not support a distributed set of users drawing on the same video.

 

They feel there may be a big market for being able to provide telestration over the network to a distributed community.  They took as their requirements:

  • Distribution (multiple users physically separated)
  • Heterogeneous devices (specifically PCs and mobile device screens)
  • A variety of input formats (mouse, touchscreen, etc.)
  • Coordination of a group of users so they don’t interfere

 

Their implementation composes the diagram in a server, then mixes that into the video at the client.  This allows them to ignore differences in screen format and resolution (or, more accurately, to push all that handling into the client).
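One simple way to get that client-side independence is to keep the strokes in normalized coordinates on the server; this is my assumption about how such a design could work, not a detail from the talk:

```python
# Sketch: the server stores telestration strokes in resolution-independent
# normalized coordinates; each client scales them to its own screen when
# mixing the overlay onto the video.  Names are illustrative.

def normalize(points, src_w, src_h):
    """Convert device pixels (e.g. from a mouse or touchscreen) to [0,1]^2."""
    return [(x / src_w, y / src_h) for x, y in points]

def denormalize(points, dst_w, dst_h):
    """Map normalized stroke points onto a specific client's resolution."""
    return [(round(u * dst_w), round(v * dst_h)) for u, v in points]

# A stroke drawn on a 1920x1080 PC...
stroke = normalize([(960, 540), (1440, 270)], 1920, 1080)
# ...renders consistently on a 480x320 mobile screen.
print(denormalize(stroke, 480, 320))  # [(240, 160), (360, 80)]
```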

 

Applications:  Telemedicine, remote collaboration, education, first aid response, gaming, social networking.

 

They have a lightweight implementation of drawing that could be done strictly in a browser (low overhead).  The service does need two-way communication for telestration, one-way video delivery, and support for overlay layering in the end device (he said that most PCs and set-top boxes support overlay layering sufficient for the service, but didn’t talk about mobile devices).  The service also needs the ability to synchronize video streams to different devices so that the telestrated drawings can vary over time and refer to specific parts of the video consistently on all terminals.

 

He showed a couple of examples, one of a group of friends watching snowboarding in which one of them was illustrating what was going on.  In another, half a dozen doctors watched an operation from remote locations; any of them could annotate the screen to show things to everyone, including the surgeon.

 

He talked a little about extensions to the work that will be capable of tracking objects as they move in the video and attaching telestration to objects.

 

Question:  Does this tie in to the synchronization presentation by Hans Stokking yesterday?  (Yes, synchronization is needed).  How do you resolve the problem that any delay in the channel causes twice that delay in others seeing the result of your telestration?  (I didn’t completely understand the answer.  It seems like they have a way of identifying when to apply the telestration updates, which they use in the terminal, but not full synchronization of the underlying video.)

Another question related to object tracking, and whether it would be useful in targeting advertising. The answer is potentially yes.

 

How Can Recommendations Be Presented to TV Viewers? Concept Testing with Potential Users Sue Hessey (BT, UK); Ian Matthews (BT, UK)

The purpose of doing this is to help customers deal with the problem of too many choices.  (Comment – during one of our lunches there was a discussion of the difference in meal formats here compared to Bordeaux.  The meals at this ICIN were buffets with a lot of dishes, while in the past most meals were served with no choice.  We all faced the same problem – way too many choices and the inability to sample everything.)

 

In an online world, users have gotten used to making their own choices by doing searches, looking at recommendations from others attached to lists of choices and making their own selections.  On the TV there are many problems:

  • Users aren’t looking to make choices – they don’t want to interact.
  • The typical remote is a lousy input device.  You can’t sort through a large number of choices using that (Comment – she was even more limited in how she saw the user using the remote, only the four “arrow” keys, not explicit channel or option selection.  This probably does represent the behavior of most people)

 

There are different ways of doing recommendations:

  • Profile based
  • Affinity (like Amazon “people who watched X liked Y”)
  • Based on recommendations from friends.
  • Ratings (Like TiVo suggestions).

 

Her work was more on the user interface to present the recommendations  than the underlying technology or the scheme to generate them.  Their system did not actually present video – it was a disconnected prototype that focused on judging response to how items were presented, not what was presented or anything about the user actually watching the recommended video.

 

They have a trial of this in progress.  They had two user interface designers come up with designs for different ways of delivering the recommendations and ways for the user to select and navigate.  Users were given limited information on how to use the system, to simulate what is likely in the real world.

 

They had four designs:

  • A full program guide
  • Blocks of times with recommendations on top
  • A widget overlay with a list of recommendations down the right side
  • An odd “wheel” design (read the paper, it’s impossible to describe; apparently they put it there just to judge response to an unusual design)

 

This was a qualitative trial where they got a lot of responses both individually and in focus groups.  They had 17 people in 5 groups, including both BT people and non-company people.  There was a variety of experience with program guides and recommendation systems.

 

They gave people two scenarios, one where you have just turned the TV on and one where you have finished a program and want something else.  They used a “Grounded Theory” process to analyze the results, which produces a good concise summary: statements are recorded, coded into categories, and clustered.  The summary presented showed the clusters in a font size related to importance (number of comments).

 

Conclusions:

  • Users really want the full program guide, independent of what kind of recommendations they are getting.  They don’t want to miss something.
  • Users can navigate using horizontal and vertical controls – users actually LIKED the “Wheel”, which the designers didn’t expect, and drew analogies to Apple’s interfaces.  It was “Cool” and that was important.
  • Users found it hard to pick out which item had the highest recommendation (top, middle, biggest size, etc.)
  • Thumbnails (especially moving) are useful as long as text can be seen too.  Users want to preview a choice before committing.
  • The ideal number of recommendations is between 5 and 7.

 

She presented their proposed interface which gave the recommendations down the left, the most important on top, and the program guide filling up the rest of the display.

 

Some more things:  recommendations to multi-person households are a challenge, social recommendations are needed, TV is relaxation, and “don’t make it too complicated”.

 

Question – How does this work with a mobile phone device (small screen)?  They are interested in understanding this.  Maybe look at it as a second screen to help localize who is selecting content for the big screen.

 

Question – was the interface interactive?  No, just PowerPoint for now.  That’s a next step.

 

Question (Chet McQuaide).  Observation – his IPTV offers a “Favorites” category, but it’s based on channels, not subjects, and that’s wrong.  He refined the question about using a personal device as a remote, which is what they are looking at.

 

Question (me) – Did the users think there was a cost to watch a selection and does that matter?  Users insisted that if they get a recommendation it’s based on their views, not what the operator has to push, and the system has to make clear what the costs are, then the users can handle it.

 

Session 7B: Future Ideas Chair: Dan Fahrman (Ericsson, Sweden)

Dan is a past chair of ICIN.  He introduced this session as where the application weaving and transformational challenge is really addressed.

 

Analysis of Design Patterns for Composite Telco Services Corrado Moiso (Telecom Italia, Italy)

He began with a description of needs for controlling and monitoring Telco services, specifically lots of dynamically created and synchronized tasks, long running transactions, and handling of asynchronous events.  He showed a reference model that consisted of a composite service having multiple component services (which communicate asynchronously), each of which deals with the devices and users.  He then went through various patterns that apply (Comment – the use of design patterns was a major thread of computer software architecture and design maybe 15 years ago and still has strong support.  It basically identifies patterns for how pieces of a design interact that can be reused.)

 

He talked about client/server, publish/subscribe, and a couple of other basic patterns, then some more complex patterns including:

 

  • Application started transaction, which shows how an application can create a transaction that takes multiple message exchanges.
  • Solicited transaction – transactions undertaken in response to the composite service by a component.
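As a rough illustration of the simplest of the basic patterns he covered, publish/subscribe, here is a toy sketch in which a component service notifies the composite service asynchronously instead of being polled (all names are mine, not from the paper):

```python
# Minimal publish/subscribe sketch.  A component service publishes network
# events; the composite service subscribes rather than polling.

class EventBus:
    def __init__(self):
        self._subs = {}

    def subscribe(self, topic, handler):
        # Register a handler for a topic; multiple handlers may coexist.
        self._subs.setdefault(topic, []).append(handler)

    def publish(self, topic, payload):
        # Deliver the event to every handler subscribed to the topic.
        for handler in self._subs.get(topic, []):
            handler(payload)

bus = EventBus()
received = []
# The composite service registers interest in call events once...
bus.subscribe("call.incoming", received.append)
# ...and the component service notifies it asynchronously.
bus.publish("call.incoming", {"caller": "+3912345"})
print(received)  # [{'caller': '+3912345'}]
```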

 

He showed examples of these patterns in four different interfaces (Parlay, Parlay X, Ribbit, and one other).  He showed a specific example of Parlay and Ribbit showing the differences in how the same pattern is implemented.

 

Patterns provide a language-independent way of specifying repeatable ways in which components interact to accomplish something; they are often mapped into BPEL or Java for implementation.

 

He went through a discussion of a parental control service and described the underlying patterns for its implementation, then showed the mapping of that description into BPEL.  The result apparently doesn’t quite work.  (The reasons are fairly technical, and I don’t think the 15-minute presentation was enough to really explain what was going on.)

 

Some lessons:

  • Efficient implementations require relatively complex design patterns.  The complexity isn’t due to the specific protocols, but due to the complexity of the task being performed.  (Some examples included the use of notification instead of polling).
  • The definition of APIs should start with a description of the pattern of use, not the specifics of the interface.  The pattern will define what things are really needed.
  • His final lesson had to do with extending the interfaces of some APIs to allow a Session ID to be incorporated, which I believe relates to being able to use interfaces asynchronously, but there wasn’t enough information presented in 15 minutes for me to be sure.

 

Towards Sustained Multi Media Experience In The Future Mobile Internet Albert Rafetseder (University of Vienna, Austria)

 

He says that in Austria mobile internet is cheap, and users often use it casually (e.g. killing time while waiting for the train).  Today there is a lot of bandwidth at the radio interfaces, but there is a lot of inflexibility in the access structure and in how traffic gets into the public internet.  The concern is that as we get “hypergiant” content providers like YouTube, they set up lots of connection points to ensure that they are physically close to the customer requesting a download.  Unfortunately, the mobile network tends to pick limited routes and fix them for the duration of the endpoint’s connection, which means it does not find the best route to these peering points.

 

He talked about various ways of handling this, including inspecting packet flows to determine best routing and changing the routing based on that.  This works, but there are no guarantees because the internet doesn’t have any QoS guarantees.  The internet continues to work primarily due to overprovisioning (more capacity than needed).

 

Another idea he presented was to take advantage of the nested characteristic of video coders (including H.264).  In this view the core, which must be delivered, is sent in one flow with high QoS and low delay, while “enhancement” information is sent in separate “best effort” flows.  (Comment – déjà vu.  In the early 1980s I was involved in the design of a packet network for voice using ADLPC coding which had the same property, and we handled overload by grouping the bits in the voice samples so all the most significant bits were in packets with the highest priority, while lower-order bits were sent with lower priority and got dropped in overload.  The result was smooth degradation, with the ability to handle substantial voice overloads without losing intelligibility of the voice.)
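The bit-priority grouping I described can be sketched as follows; this is a toy reconstruction of the idea, not the original design:

```python
# Split each sample's bits so the most significant bits travel in a
# high-priority flow and the low-order bits in a droppable best-effort
# flow.  Purely illustrative; bit widths are arbitrary.

def split_sample(sample, msb_bits=4, total_bits=8):
    """Split an unsigned sample into (high-priority MSBs, low-priority LSBs)."""
    lsb_bits = total_bits - msb_bits
    return sample >> lsb_bits, sample & ((1 << lsb_bits) - 1)

def reconstruct(msb, lsb=0, msb_bits=4, total_bits=8):
    """Rebuild the sample; if the low-priority packet was dropped, lsb=0."""
    return (msb << (total_bits - msb_bits)) | lsb

sample = 0b10110110  # 182
msb, lsb = split_sample(sample)
print(reconstruct(msb, lsb))  # 182  (both flows delivered)
print(reconstruct(msb))       # 176  (LSB packet dropped: graceful degradation)
```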

 

He talked about federated architectures in which providers interconnect networks with routers with different characteristics and basically trade resources to handle whatever the application needs (i.e. if it’s low delay, maybe it routes through a partner network that has focused on that, while your network takes their bursty, delay-tolerant traffic for them).

 

He talked about other techniques to cooperatively manage the mobile internet that could improve performance to the end user.

 

They did a prototype implementation of rerouting a video flow on the fly.  One conclusion was that a lot of the overhead was not related to communication but to the implementation of the virtual machine on which the algorithms ran.

Enabling Context-Sensitive Communication Experiences Sebastian Lampe (Fraunhofer FOKUS, Germany)

The presentation was made by Irina Boldea.  Users want to be able to access their services any time, from anywhere, and from any device.  The challenge is doing this in a world where the telecommunication companies’ control is decreasing over time because “over the top” players like internet providers have slowly captured much of the responsibility for providing services.

 

For example, Google now provides voice, mail, mailboxes, messaging, etc. 

 

In their view the key elements to providing a converged service are converged messaging and context.  She showed an interworking of services to devices that basically allowed a service designed for one endpoint to be delivered elsewhere.  She talked a bit about standardization of converged messaging and the fact that it’s a very complex service.

 

Context was presented like a profile – lots of characteristics of the user including some attributes that are policies that control how the user wants to receive services.  There were also things that were implied about the user (like buying interests) rather than being set explicitly. 

 

A 3rd element needed is “Personalized messaging”, which makes use of the other two. 

She showed how this could be done using the policy deployment and enforcement mechanism of IMS to describe personalizations as policies and have them carried out by the converged messaging service as needed.  She gave a detailed example (not really readable at the scale of the presentation).

 

All of this was prototyped on the FOKUS Open SOA Telco playground, which provides models of the IMS components.  They implemented the message personalization and converged messaging pieces according to OMA standards or pre-standards.  One of the next steps will be to extend to LTE using the OpenEPC toolkit that they have.

 

The Inner Circle: How to Exploit Autonomic Overlays of Virtual Resources for Service Ecosystems Roberto Minerva (Telecom Italia, Italy)

Roberto is the technical chair for the next ICIN and has been a long time contributor to ICIN.  This is joint work with a university in Paris where according to the introduction Roberto is studying for a Ph.D.

 

He stated a key problem as being how to virtualize a network while still maintaining quality of service guarantees (what one of the earlier talks showed).  The future is totally distributed and maybe programmable.

 

In his view the internet is dramatically changing – it’s becoming asymmetric.  Large companies like Google, Microsoft, Amazon, etc. essentially represent the internet to the user (i.e. if you aren’t on Google you don’t exist).  This is a big departure from the open days of the past, where anyone who hooked up became instantly visible to all.

 

Cloud Computing:  The trouble is that each cloud is a private universe – you can’t mix and match components from multiple clouds.  Your choice of a cloud locks you in.

 

Load is driving companies to identify the few customers consuming lots of bandwidth and throttle them back a bit.  This violates the myth that the internet is totally transparent.

 

Most telcos have decided they don’t like peer to peer networking, but in fact they should embrace it – it’s scalable and robust, at the expense of more explicitly dealing with the challenges of distribution.

 

His problem statement – rethink some aspects of network architecture:

  • Relationship between application and network (maybe not completely independent)
  • Allow virtualization
  • Follow end-to-end principle

 

Enabling technologies:

  • Peer-to-Peer
  • Autonomic networking
  • Virtualization
  • Grid computing
  • Sensors
  • Ad hoc networking.

 

He talked about moving from a fully Autonomous network to cooperating networks – there will be multiple networks, they must cooperate.

 

A Network of Networks  -- networks cooperate to deliver services.

 

He proposed a structure from peer-to-peer that relies on “special” nodes in the distributed configuration to be responsible for understanding the capabilities of the rest and to enable resource sharing.  These become the inner circle responsible for making the network work.  He presented a complete architecture around this concept that achieves high availability with unreliable nodes, and said that this is a structure which can be built in a “Network of Networks”.  This is a way in which the Telco community could take the lead in defining the next generation.

 

(Comment – Roberto covered a huge number of concepts that were complex and unfamiliar to many if not most in a short period of time.  I feel I got a bit lost in the details.)

 

The panel was all asked to comment on how they would define policies.  Roberto talked about how the “grid computing” world is more complex than client/server.  We have languages to define policies and programming, but there’s more work to be done.  Irina talked about some specifics they used in their prototype.  It’s an XML language focused on flexibility, not necessarily performance. 

 

Question (Ericsson person) – Roberto suggested that one operator could provide a virtual IMS in another country.  He suggested that this falls within the IMS standard and effectively means that the serving company is essentially supplying the IMS software on equipment in the served country (i.e. you are a virtual network operator).  Is this true?  I think he agreed, more or less.  He cited another example of Apple as a VNO, using multiple carrier networks to serve their clients.

 

Question (Max Michel) – Nice topics for next year’s conference, but he wants to know how we get people to pay for the future.  The current model of the internet will not scale up, but adding all the complexity of telephony with tariffs and exchanges on everything will stifle innovation.  (I didn’t understand the answer Roberto gave or how it related.)  One of the other speakers suggested a market in communications technologies and futures as perhaps the foundation of how you put networks together. 

Posters and Demonstrations (Chair Warren Montgomery)

ICIN has had a “poster” session for several years.  It has in the past been used to accommodate presentations that didn’t quite make the grade for live presentation but seemed interesting or came from interesting sources (i.e. small companies and universities the conference wasn’t otherwise getting input from).  This year we did something different, partly based on feedback I gave after last year’s conference.  We made an explicit call for submissions to this session with the enticement that we would allow presenters to give short demonstrations.  We also allowed the poster and demonstration presenters to submit full papers and have them included in the proceedings of the conference.  We got many good submissions, about half of which were demonstrations.  As the chair of the session I can tell you it takes work logistically to support this given the number of presentations and their special requirements.  Fortunately I had excellent assistance in doing so.

 

Also new for this year was some dedicated time for viewing the presentations at an evening reception sponsored by Nokia/Siemens Networks.  That proved very successful as it gave participants an extended period to interact with the presenters.

 

A Platform Providing Bidirectional Service Integration for the Dynamic Long-Tail Service Market Lajos Lange (Fraunhofer Institut FOKUS, Germany)

This presentation really focused on a mechanism for efficiently building services, allowing developers to build services aimed at niche applications.  It presented a service broker from Fraunhofer FOKUS that allows services to be built from a wide variety of service elements (e.g. popular internet services in addition to session control) and exposed to applications through a variety of APIs following SOA and Web2.0 principles.  The broker is designed to be “industrial strength” – high performance and scalable.

Evolving the Service Creation Environment Stephen Hall (Nokia Siemens Networks, South Africa)

This poster presentation covered various factors in what determines needs for service creation specifically in carrier networks and how to apply Web2.0 technologies to help update the service creation environment to meet these needs.

Moving the Entire Service Logic to the Network to Facilitate IMS-Based Services Introduction Youssef Chadli (Orange Labs, France)

This was an interesting and somewhat unexpected presentation, showing the problems an operator faces in specifying and integrating new terminal types with a lot of services.  Specifically protocols and clients in the terminal are often incompatible and adaptations need to be made.  What the authors propose is using various capabilities for download to essentially download a compatible client onto a new terminal to allow it to be rapidly introduced.

Prototyping Mobile Broadband Applications with the Open Evolved Packet Core Marius Corici (Fraunhofer FOKUS, Germany)

The Evolved Packet Core is a new set of service mechanisms to work in an all IP communications world on top of IMS, the public internet, or other access and transport networks convertible to IP.  It provides capabilities for QoS and multimedia communication to support a variety of applications, but also has some new interfaces the applications must use to get those capabilities.  What this presentation and simple demonstration illustrated was a simulation toolkit that would allow new applications to be built and integrated.

Service Composition For End-Users: a Tool for Telephony Services Mazen Shiaa (Gintel, Norway)

This poster and demonstration was aimed at allowing real end users to create their own services by composing service elements.  They provide a graphical environment to build services with simple elements and allow the user to put those elements together, test them, and deploy the service.  Their current prototype works with an Android smartphone to provide the service (i.e. the logic is downloaded into the phone which provides call handling), but the goal is to also be able to deliver the same logic to network based call control elements.

 

Blending the Telecommunication Domain with Web 2.0 Services Konstantinos Vandikas (Ericsson, Germany)

 

This presentation and demonstration showed how to take services from a variety of sources and combine them with a simple graphical environment built in Java.  It could then deliver services over a variety of different access media (internet, IMS, circuit, etc.).  They had a multimedia presentation illustrating the application and a short demonstration, using SIP and a simulated IMS, of a service that would handle incoming calls by checking the called party’s Google calendar and, if the person wasn’t available, make suitable notations for later follow-up.

The Family Portal: Combining IMS and the Web van Thuan Do (Linus, Norway)

This poster was focused on a service that they are exploring to use technology to provide ways to enhance family and extended family communication.  The service provides a family home for communication that allows secure access from anywhere and allows family members to share media, leave messages, and communicate often.

Closing session (Max Michel)

Max started with a view of what we thought in 2009.  ICIN community was mainly interested in architecture and control and the impact of the outside on the network.  There was a lot of interest in alternative networks (grids, sensors, etc.)

 

ICIN is unique in using peer reviewed papers.  (Comment – there was some discussion of this in the TPC meetings.  I don’t know of any other professional forum where every paper is reviewed by 10-15 experts in the field before publication.  The review process is as much about stimulation of interests as well as review.)

 

Some themes:

  • Content – lots of contributions
  • Sustainable business models (but not much on Net Neutrality; Comment – I think we all assume it!)
  • Open network delivering XaaS
  • Sustainable growth and caring about people
  • Security and privacy
  • Communities (social networking)
  • Smart environments (internet of things and augmented smart reality)
  • Really working services (QoS, Quality of Experience)
  • Standards

 

We now had a discussion on the paper on de’perimeterization, which grew out of the attempt to summarize ICIN 2009 by a sub committee of the ICIN TPC (Comment – I had only a minor role in this one)

 

Roberto – Network operator services are now strongly coupled to networks.  That limits markets  -- TI has Italy, Brazil, Argentina.  On the internet anyone with a domain (RobertoMinerva.com) can provide services everywhere (Comment – yes indeed, something I’ve been highlighting as a major problem because in addition to controlling revenue potential it limits the interest in others partnering with you if you can reach only a limited set of customers)  If Telco wants to compete with the web, we have to figure out how to get around that and the answer is to more loosely couple networks and services to allow use from everywhere. (Comment – that’s a very powerful statement of the situation and what has to be done)

 

Roberto also talked about the speed of standardization – way too slow to meet this kind of need.  One problem is we think not about services but about infrastructure and cost effectiveness of infrastructure.  Strong statement:  Don’t deploy next generation infrastructure until we have a good idea of what services we want to deliver and know we can do it in an open way. 

 

Telco deploys Standard, but closed systems (we follow standards, but you can’t talk to our internals.)

 

Google, Facebook, etc., deploy non-standard but totally open systems.  Each has an API different from anyone else, but everyone is free to access it.

 

Comment: (Anders Lundquist)  Strong statement.  He talked about the app store problem.  Once the application goes to someone else who isn’t tied to any particular operator, then the user can take his business anywhere that is cheap.

 

Stuart:  RCS (Rich Communication Services) when it’s done will simply duplicate what Skype could do 3 years ago.  We no longer lead in the area of services.  He felt the segmentation by region was really a problem of regulation, which has shaped the way we compete or don’t compete.

 

Comment (ALU person)  One thing that goes wrong is we assume we have to build the services ourselves.  eBay has 8 billion API calls in the last month – most of their revenue (assuming these calls generate revenue) comes from other people using their capabilities, not their own services.  (She advocated opening APIs and charging for them or establishing other compensation.)

 

Roberto – his company recently rolled out a strategy:  “We need to create a lot of new services”.  This was driven by the perception that they were going to lose voice revenue and to make it up in services would be challenging.  He shares the view that you need ecosystems working on your platforms to build all those services.  That’s a big attitude shift for most.

 

Chet McQuaide:  he observed that part of de’perimeterization involves sharing context.  Context from selecting videos would be useful to Amazon (or Netflix?) while we would be interested in the profiles of those companies.  Everything is shareable.  (Comment – well, maybe not.  There are both legal issues and customer resistance issues in sharing profile information.)

 

Comment – (Danish TV person) – we keep mixing revenues from regional access with other things.  Access is still a good business and it is regional.  Services are more competitive and that’s where the threats are.  (Comment – interesting, I wonder if the division in TV is a bit firmer than in telephony)

 

Roberto:  Competitors are doing a good job; they are satisfying customers.  We should look at some things as lost and get into new things where we can compete.  OneAPI is too late – a waste of money.  We should be thinking about machine-to-machine APIs, augmented reality, and other areas where we can add value. 

 

T-Labs person:  His view is biggest competitor of operator A is operator B, not Google, Facebook, etc.  If operator A holds back on introducing LTE, operator B can do it and get all the business. 

 

(Comment – The real problem here in my view is that there are multiple businesses here:  Access and services.  Access has competition on price, not services, and very few players because investment required is large.  The service business has been deperimeterized and is very competitive because the cost of entry is near zero.  Operators need to become better bit pipe providers and recognize that services are something it will be hard to compete in, but they don’t want to be in that future.)

 

Rebecca Copeland:  In some parts of Africa they have borderless service – you cross a national and service provider border but use services in your home network seamlessly.  (Comment – didn’t we address this with HLR/VLR and WIN 20 years ago?)

 

Max’s take was basically similar – it’s a business problem, not a technology problem that causes us to make a big deal out of roaming.

 

Session summaries: 

 

  • 1A – IMS is moving from hype to reality – not what we thought 6 years ago but it’s working, it’s getting introduced and lots of problems are being solved.  PSTN will eventually stop, but today it’s behind everyone’s network.  We can’t ignore it.
  • 1B:  App stores – not a big revenue source but a leverage point. 
  • 2A:  Content delivery.  Content is driving a lot of economics of the internet and disrupting business models because of the strain.  We might be able to help here by enhancing the network to deal with it.
  • 2B Security/Privacy.  Lots of personal data is getting leaked – this has to be fixed.  Users must be educated about privacy risks and policies.  User authentication is a big deal; biometrics can help, first as a second factor alongside ordinary means (passwords), but eventually as the first factor.
  • 3A Home networking.  Commonplace, and risky.  Digital content is everywhere.  Re-using IMS functions may help.
  • 3B Context.  There is lots of context in the network and there are standards; context is very important to delivering services.
  • 4A Service composition – back to basics.  Lots of enthusiasm around it.  Interest in the topic is high (half the posters dealt with it).  It looks like practical tools are emerging.  (Unfair, like comparing an engine from 1954 with one from 2010)
  • 4B Social networking.  It’s increasingly important; operators can enrich it or exploit it (viral marketing).
  • 5A XaaS.  Lots of interest in virtualized systems.  Operators have natural value to add.
  • 5B  EPC is the next generation of access anchoring to services.  It has a key requirement for multi-access portability and device shifting.  Service gateway function hasn’t been considered thoroughly
  • 6A  Content Services.  Not just to entertain but to market.  Social interaction is important.  User interface is important (Comment – this was a varied topics session)
  • 6B Sensor Networks.  A lot of the papers presented explored key aspects, but this is a new and growing area.
  • 7A Business Models.  The new world requires new thinking (new models).  Consider the internet as a model (usage vs. subscription pricing).
  • 7B Future.  Border between internet and telecom is disappearing, there’s a lot of work to be done.  A bright future lies ahead in weaving applications into the network.
  • Posters/demonstrations – 8 presentations, lots of interest. Will probably keep the format and expand.

 

Interesting discussions:

The networking reception Monday Evening was interesting, with short remarks by the Senator for Economics, Technology and Women's Issues and Mayor of Berlin (a Mayor of Berlin but not the Mayor of Berlin) (Comment – I’m not sure I got that exactly right; his title was given variously at various times and I admit I don’t understand the government structure here).  The key message is that Berlin has been working hard to establish itself as the technology center for northern Europe, with some success in attracting several large R&D companies.  There were about 50 people from the local Internet development community at the reception, though as an observation I think there wasn’t as much mixing between those folks and the ICIN attendees as had been hoped for.  I think that’s part of the challenge for communication carriers – our culture of how we socialize, where we go to conferences, and what we expect to do there isn’t entirely compatible with the opinion leaders in new computing and communication applications.

 

During Lunch on Tuesday with Dave Ludlam, John OConnel, and an attendee from Telenor we had a discussion on how ICIN should evolve to stay relevant.  Some interesting observations:

  • We don’t have a unique focus, and that creates a real problem.  We used to be the place to go for IN, but now we cover many topics.  The title doesn’t suggest that this would be a good place to find out about application stores or cloud computing, yet we have work on that here.
  • The focus issue creates other problems.  It’s hard to get key people from vendors here because their customers in the carriers might not be here.  We have people from BT, but maybe not the people doing Cloud Computing.  We might have cloud computing people from Telecom Italia, but not IMS, etc.  Vendors can’t afford to send the right people to cover everything.
  • Focusing has its own challenges – for any given topic there are already other conferences that are the focus.
  • Geography is a problem.  We don’t have people from Apple or Android here; the key people are 9 time zones away in California.  Getting next generation applications people in the US to attend is a big challenge because of time zone, customer base (i.e. their key customers aren’t here), and focus.  A speaker from Google was once lined up and reputedly got fired for agreeing to come here.
  • There may be an alternative.  Sweden and the other Nordic countries are like California in fostering innovation.

 

The attendee from Norway spent some time talking about his world – users are very sophisticated.  They have a lot of services and expect a lot.  They will switch providers rapidly.  Nobody signs 2 year contracts.  It’s a very competitive environment. 

 

We talked a bit about VON as an older competitor to ICIN and how they got people.  One thing was fear and focus.  Everyone wanted to know what was coming with VoIP, and Pulver successfully associated VON firmly with “the place to be if you wanted to learn about VoIP.”  (Comment – I think at the time this was easier; he was the leader, and the focus was initially all in the US, which made the logistics a bit easier.)

 

In other discussions with Rebecca Copeland a couple of interesting things came out:

She made the observation that it is both cheaper and more convenient for her to use Vodafone voice roaming on her iPhone than to use Skype, because prices for roaming mobile data are expensive.  The convenience factor is access to her normal address book and the fact that her normal phone number works without effort.  She also noted the problem of compatibility with internet providers.  Google and Skype don’t interwork (she said Sprint can connect them, but it takes effort every time one of their proprietary formats changes because it’s done by packet inspection and hacking).  She felt that addressing this would require standards on the order of IMS.

Roberto – 30% of Facebook users in Italy are mobile.  What’s relevant to the user?  Community.  The user chooses a community where their friends are. 

 

In another conversation on vendor/customer relationships, Rebecca indicated that working with Huawei carries an interesting challenge.  Chinese culture is apparently not to say no, so it is difficult to get a definitive answer on what a product cannot do or what interface won’t be developed.  We had a lot of discussion on this and realize it’s not always the case, but I hadn’t thought before about the impact of culture on technical/business negotiations.  (Comment – in my view the difficulty in saying no is a significant problem in developing feature-rich software.  In the mid 1990s when I moved to Lucent’s IN business unit, one of the first meetings I remember was a management meeting with our new director.  One of the long-time managers overviewed recent results and proudly described how much revenue recent product sales to our best customer had brought in.  The director asked one question:  “Does anyone know how much those sales cost and whether we made a profit on them?”  The answer of course was that nobody did.  It pointed to a persistent problem in the IN business.  The platforms built by Lucent and others were supposed to be true standard products with multiple customers for the same elements, but our customers kept demanding unique features to support their unique applications, and both our developers and sales teams kept saying “yes”.  The result instead was a custom development business in which each product was customized for each customer, and our business was not set up to support this (too much overhead managing each unique combination).)

 

Business Models for Mobile Platform (BMMP 10)

This was held in conjunction with ICIN, but is actually an independent conference, which last year was held in Berlin in conjunction with a different meeting.  Last year they published the results as a book.  I only took good notes on the introductory keynotes.  This was due more to my mental state after a long week and the difficulty of finding power for my laptop than to anything in particular about the content, which was a thorough exploration of various aspects of platforms in mobile networks.

 

Intro – Pieter Ballon

Pieter is the organizer of this meeting and in his overview went through a lot of basic characteristics of platforms and how they appeared in the mobile internet world as an introduction to the rest of the program. 

Roberto Minerva – A Network Paradox brought on by Deperimeterization.

 

What Roberto basically presented was that while usage of our networks is rising rapidly, revenue is at best stable and most often declining.  This is not sustainable; it is in fact the success of the mobile internet that is responsible for the stress on those who have been most successful in providing it.

 

Over time the operators have reacted by trying to cut costs to survive.  Some create new services, but these haven’t brought in enough revenue to offset the rising costs of providing infrastructure.  Regulators eventually have to make a decision about how to keep the operators alive in this environment:  do you want the operators to be simple bit-pipe providers, or allow them to be something else?

 

To make this concrete he posed the problem of a 40 billion Euro service provider.  If revenue drops only 1% every year, that’s a 400 million Euro gap to make up each year:

 

  • Do you find new markets for existing services sufficient to make it up?
  • Do you try to move up into applications?

 

Someone pointed out that margin was important here.  Indeed the new markets often have lower margins, which means that to preserve your profit you have to grow revenue even more.
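The arithmetic behind this exchange can be sketched quickly.  The 40 billion Euro figure and 1% decline come from Roberto’s example; the margin numbers below are purely hypothetical, chosen only to illustrate why lower-margin new markets force even faster revenue growth.

```python
# Illustrative sketch of the revenue-gap arithmetic from Roberto's talk.
# The margins (30% core, 15% new markets) are assumed for illustration only.

revenue = 40_000_000_000          # euros: the 40 billion Euro provider
annual_decline = 0.01             # revenue drops 1% per year

gap = revenue * annual_decline    # revenue to replace each year (400 M euros)

core_margin = 0.30                # hypothetical margin on traditional services
new_margin = 0.15                 # hypothetical (lower) margin in new markets

# To preserve absolute profit, the profit from new revenue must equal the
# profit lost with the old revenue -- so the new revenue needed is larger
# than the gap by the ratio of the two margins.
lost_profit = gap * core_margin
new_revenue_needed = lost_profit / new_margin

print(f"Revenue gap: {gap / 1e6:.0f} M euros/year")
print(f"New revenue needed at the lower margin: {new_revenue_needed / 1e6:.0f} M euros/year")
```

With these assumed margins, replacing a 400 million Euro gap requires 800 million euros of new lower-margin revenue – twice the gap, which is the point the questioner was making.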

 

In entering new markets you may need to adjust the business model (specifically he was looking at being a broker, with APIs empowering 3rd parties for the applications).

 

Roberto’s slides had a whole series of business models, from “utility” at the base to “broker” at the top, ranked by margin among other characteristics.  The clear message in these charts was that operators should move towards the “richer” end of that spectrum where margins are larger.

 

Telcos face two problems in competing with “Webcos”.

  • Webcos give away the things that Telcos want to charge for because Webcos have different business models.
  • Webcos sell globally, Telcos sell locally.  Global means access to LOTS more customers.

 

Can Telcos enter new markets with a traditional ecosystem approach as has been used in the past to get applications?

  • For 20 years we have been trying to open APIs in carrier networks and establish ecosystems around them, but
  • APIs often hide the most powerful things that the application builders want most.
  • The local scope of the telco makes it hard to get partners, who would rather be in ecosystems of more global reach.

 

What can Telcos bring to new markets?

  • IPTV is just another bundling of content – the user doesn’t see much difference from Cable or Satellite, so why buy? (Or more accurately, why pay a premium for it?)
  • To enter you need something better than yet another price based bundle – there has to be something the customer gets from you rather than a temporary good deal.

 

Is the network an asset?

  • What about the end-to-end principle (key to all IP), which makes it hard to add value in the network?
  • IMS, SDP, EPC, and other telco creations don’t get this (i.e. they violate the end-to-end principle).
  • The result is we are putting LOTS of money into these assets with no certainty of getting return for it.

 

He then went through a lot about APIs and what could be done with them. 

 

The bottom line from his talk was “Pick your role and figure out where you want to play.”

  • A bit pipe has lower costs but needs big pipes and big span. 
  • A service enabler needs a complex infrastructure on their transport. 
  • A service provider needs services.

 

Question:  20 years ago there was supposed to be consolidation.  Now we have more operators than ever.  Why?  Answer – there has been consolidation but it’s invisible to the customer.  Telefonica owns most of Telecom Italia.  (The real message here is that yes, carriers have consolidated and that can help margins, but as they have consolidated they have retained traditional brand identities.  A second takeaway was that regulators have not encouraged consolidation to the point where operators gain the power to raise prices (Comment – though they probably would encourage consolidation that allows operators to achieve economies of scale and lower costs).)

 

(Comment – Roberto gave this talk with a subset of a large file of slides, and afterwards said he often speaks this way when asked to present on these kinds of topics.  There were many interesting charts he flipped through to find the ones he actually showed.  I believe he has a lot of insight into how the communication business has evolved and the forces behind that evolution.)

 

Rebecca Copeland – App Stores as a business.

Rebecca started by saying that she feels we are in the middle of a disruption introduced by the iPhone. 

 

Different players with different motivation in App Stores:

  • Handset makers want to sell handsets
  • OS Vendors want to sell licenses
  • 3rd party aggregators want to sell brokerage (e.g. ads)
  • Carriers want to sell service subscription.

 

It’s not about the revenue from the application.  Apps are disposable – small, cheap, the price of a cup of coffee.  You use them and throw them away.  If you have to get a new one with a new phone so what?  They are cheap.  (Comment – this is in stark contrast to PC applications where the cost of software is very significant and as a result becomes a factor in countering the desire to buy a new PC if that means having to pay all over again for Office and other expensive software.  It is also a real contrast from the traditional telco model which is about long term subscriptions and reducing “churn”.  To exploit this, the merchant has to be agile, capable of selling a lot of applications at low overhead.  Apple’s experience with iTunes is in fact a perfect training for this.)

 

Apps are changing user attitudes.  Things they try for apps become things they now require (e.g. multimedia, location, etc.).  App Stores introduce customers to micropayments.  (Comment – yes, I believe that a big part of Apple’s success in particular is that many of the customers for iPhone already had accounts on iTunes and had experience with it.  I would actually cite another example.  I believe electronic toll payments made a significant change in my behavior and probably many others’.  When tolls were paid in cash there was a decision to be made on each trip as to whether to spend the money on the toll road or take the slower “free” road.  Taking the toll road required having and spending coins – very visible and inconvenient.  With an electronic toll account the payment is automatic and painless, and you much more easily perceive that the cost of using the road is small compared to the time it saves.)

 

The conclusion of her material was really that the whole concept of downloading applications for a low price is a radically different model from the one we are used to, and will as a result cause significant disruptions to how we acquire and use communication services.