This is the 14th ICIN conference and the first held outside of France. The conference drew about 150 people, roughly half of whom represent its loyal following who have shown up for many years, and half new to this topic. The change of location may have enabled more people to experience ICIN for the first time.
The major themes that I got from this year’s conference were:
Aside on the venue
This is the first year that the conference was not held in Bordeaux.
The social part of the program included a Gala dinner, traditional at ICIN, this time sponsored by Deutsche Telekom and held in their facility, which was very good. It seemed somehow appropriate to have a Gala dinner in a building which had once held switching equipment and human operators. There was also an organ recital by Igor Faynberg of Bell Labs, who proved to be an excellent musician and performed an hour of baroque organ music assisted by Hui-lan Lu in operating the organ stops.
(I could only attend a little of this excellent program due to attending committee meetings. One change considered for next year is moving the meetings to avoid interfering with any of the program.)
This was taught by the Fraunhofer FOKUS organization. One key message is that the telecom industry’s “value added” in services is decreasing as more moves to the internet. That’s a problem because services are where the value is and where the revenue will be drawn from. The 3GPP Evolved Packet Core (EPC) is maybe the last real infrastructure initiative capable of adding value in the network. IPTV may be an area for operators, but look at how Apple and YouTube do “over the top” as a model. (Comment. FOKUS had a lot of presentations emphasizing the potential for EPC to be a platform for evolution both from the mobile network and IMS and from the public internet. This is not a universally held view, as some feel the internet community will largely ignore it.)
IMS exposes APIs, but way too many and too much variety. The trouble is that most users now have profiles with internet providers (e.g. Facebook) and that’s where the center of services and context for the user resides. They may use IMS APIs, but IMS is likely to be only a small player in this.
One of the problems is the multiplicity of influences on IMS – IN, GSM, Parlay, etc. Another is the openness of SIP – it doesn’t force users into a particular model. (Comment – you could also say that of the internet. Openness creates opportunities the creator of a model didn’t foresee, but it also leads to a lot of redundant solutions.)
Many people think SIP when they think IMS, but DIAMETER is maybe more important (DIAMETER handles access, charging, logging, profile data, authentication, authorization, etc.)
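Diameter’s role is easier to appreciate at the wire level. As a minimal sketch (my addition, not from the tutorial), this is roughly how a Diameter message and its AVPs are framed per RFC 3588; the Cx-style Multimedia-Auth-Request composition and the AVP values are illustrative, not a complete message:

    import struct

    def encode_avp(code, data, flags=0x40):              # 0x40 = mandatory ("M") bit
        length = 8 + len(data)                           # 8-byte AVP header + payload
        pad = (4 - length % 4) % 4                       # AVPs pad to a 32-bit boundary
        return (struct.pack("!IB", code, flags)
                + struct.pack("!I", length)[1:]          # 3-byte length field
                + data + b"\x00" * pad)

    def encode_message(cmd, app_id, avps, flags=0x80):   # 0x80 = request bit
        body = b"".join(avps)
        return (struct.pack("!B", 1)                     # Diameter version 1
                + struct.pack("!I", 20 + len(body))[1:]  # 3-byte message length
                + struct.pack("!B", flags) + struct.pack("!I", cmd)[1:]
                + struct.pack("!III", app_id, 1, 1)      # app-id, hop-by-hop, end-to-end
                + body)

    # Illustrative Cx Multimedia-Auth-Request (command 303, 3GPP Cx app id 16777216):
    mar = encode_message(303, 16777216, [
        encode_avp(1, b"user@example.com"),              # User-Name AVP (code 1)
        encode_avp(264, b"scscf.example.com"),           # Origin-Host AVP (code 264)
    ])
    print(len(mar), "bytes")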
In the discussion at the break and towards the end of the morning, one of the things that became clear is that many capabilities of IMS wind up being re-invented at higher levels, in services that run over the public internet or other infrastructure that does not support them (e.g. roaming, resource management, authentication, etc.). This is controversial to the telecom community, which sees it as wasteful, but it is perhaps unavoidable.
Attendance this year is about 150 people. This is very good, but given the quality of the program it should be able to attract a lot more people. One of the key challenges is that it is too difficult to get support to attend a conference when you are not presenting, and not everyone can present if you are trying to maintain a top quality program. (Comment – this is indeed the case and a real challenge for the industry. ICIN is rare in this respect in being quite selective about the papers it takes. Selectivity has another benefit, which is that it avoids the need to have massively parallel sessions, which means that everyone attending has access to at least half the presentations and all tend to have a similar experience at the conference and common topics for discussion during the breaks. The downside is that you then must persuade people who are not presenting to come, and many employers are reluctant to support this.)
Best papers were selected through a somewhat tedious voting
process by the TPC. (Comment – I believe the major problem was starting too late on
this. I have no objections to our
selections, but I think we would have had more careful consideration if the
list of nominees (and the actual papers) were distributed at least a week in advance
with instructions to return your vote only to the TPC chair with a deadline
that would allow votes to be compiled.
Voting in the room consumed valuable time, and what’s worse, I suspect some votes were influenced by hearing who else was voting for what. We may still want some kind of confirmation of a selection done via “secret” voting, but I think giving people enough time to consider it and make their choices independent of having seen others’ choices would be beneficial, perhaps bringing out more papers.)
There was some discussion on next year’s ICIN. One of the issues is how to get the summary paper out much faster and how to strengthen the relationship with the IEEE. One of the keynote speakers is associated with IEEE Communications and has volunteered to help get it published, but the key is for us to work quickly to get the summary out. Stuart also mentioned that there is a white paper from the UMTS forum which he wrote that references ICIN heavily, but it was held up for 6 months because of controversy over an unrelated concept. The key is to act fast! (Comment – the answer is “Just do it”. After the conference a couple of us took the lead in producing a summary conference report for IEEE Communications.)
Stuart opened the keynote session by introducing all the sponsors. (Comment: ICIN has gone from having lavish corporate sponsorships to relatively lean times, and now has gotten much more support again, though not what was common in the 1990s; the world is different now.) He also gave some information about the size of the conference, the program, and the venues.
Heinrich Arnold from Deutsche Telekom Laboratories talked a bit about their sponsorship. The location is their facility.
Heinrich’s statement was that ICIN was the first conference or organization to recognize the value of enabling services – it brings together the opinion leaders in the industry. (Comment – I would agree. The decision makers don’t attend, but the people who have the views for the future and shape long term plans behind the scenes all do).
He gave an interesting example about innovation – it’s not just about creativity. He showed a bottle of German wine from a maker that is attempting to duplicate a good French wine and outdo the original in quality, and which has won several awards. Innovation can be about perfecting a concept, not just originating it.
Max is the technical program committee chair this year and introduced the theme and the selection process for the program. ICIN is a very selective conference, accepting typically about 1/3 of the papers submitted, which are a very strong and selective set to begin with because of the reputation of the conference.
He introduced a video from Malcolm Johnson, who is with the ITU and could not attend because he was attending the ITU plenipotentiary conference in Guadalajara.
One of his interesting observations was that “Consortia” standards, which dominate this world, tend to be regionally focused without interoperability. (Comment – I think this was a not so subtle dig at the situation in the US, and maybe the EU, as he went on to cite a lot of third world and developing world countries which implemented IPTV to ITU standards and the benefits they received from it.)
He made a strong plea for maintaining support for standardization as a way of reducing costs in the long run and improving customer experience and therefore getting loyalty, in an environment that often challenges participation in these activities. (Comment – I think this was an excellent presentation, and more to the point than the standards keynotes often are, covering the “why” of standardization and the impact of it without getting lost in the details.)
Ironically, the talk was delivered by Philipp Kelly, also of ALU, because Alan had become ill from something he ate. The talk was about getting to market and things that go wrong, with an analogy of getting fruits and vegetables to market. “Apples” are the applications, and the telcos and application stores are the channel.
“Telcos spend 85% of their capex in developing internet capability, but only 15% of the traffic through their networks is telco related. How do you proceed?”
An early answer was
He reported on a survey of teens on applications, concluding “the fruit isn’t ripe” – 62% are frustrated with their experience with Web2.0 applications because of bugs – video freezes, long delays, too many passwords, too much redundancy.
Consumers buying fruit are willing to pay a premium for fruit free of “blemishes”. Are application customers going to do the same? (Comment – food is a health issue; I think the motivation for blemish-free fruit is a lot stronger than the motivation for a perfect recreational IP video application, but I may be very wrong about where users put their money.)
“Fruit needs fertilizer” (i.e. money in the application example.) Are users willing to provide it? According to their data yes, users are willing to pay for premium applications, mostly based on acquiring the application once for unlimited use rather than paying per-usage.
Another lesson – don’t stick users with particular platforms and applications – Fruits are good, but a balanced diet requires variety.
“No matter what technology is used; your monthly phone bill magically remains about the same size” (Comment – I’d add to this “And your cable bill will go up 10-20% a year!” That is the challenge. My phone bill is the same as it was in 1980, my cable bill is probably 5 times as high! I think that’s typical. The telecommunications industry has always needed to figure out how to deliver things that customers are willing to pay more for, rather than simply deliver familiar services in a new way, which does not get users to ante up more cash)
Economy of scale is important, so what offers the best economy of scale? “One Google and One Nokia, but 800 operators.”
(Second Comment – I
firmly believe this is a major reason why the technology industry in the
Internet companies can reach the whole world with a few hundred people supporting everything. Network operators may claim to be global, but all are regional; even the biggest get only 30-40% of the markets in which they participate, which are not all the markets in the world.
You hear about proposals for a “Google Tax” – if internet players get all the money, why can’t we tax them to support the telecom infrastructure that enables it? The answer is that it won’t work, because the internet players aren’t getting enough money. Google has ARPU of 0.08 Euro per month (2008) across 225M customers, vs 18.6 Euro for 5.3 million customers for Telia (Sweden).
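A quick back-of-the-envelope check of those figures makes his point:

    # Monthly revenue implied by the quoted 2008 numbers:
    google = 0.08 * 225e6   # ARPU (Euro/month) x customers = ~18M Euro/month
    telia = 18.6 * 5.3e6    # = ~98.6M Euro/month
    print(google / 1e6, telia / 1e6, telia / google)   # Telia collects ~5.5x Google

So even with roughly 2% of Google’s customer base, Telia pulls in several times Google’s monthly revenue; there isn’t much there to tax.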
He claimed that operators collect much more money than internet players (80% of the pie) and projected that situation to continue (maybe declining to 70% or so).
The “Golden rule” of markets – the number 1 player has 40%, number 2 maybe 25%, number 3 15%, number 4 10%. (Comment – this is essentially the “rule of 3” from business school thinking for years.) His claim was that the number one player in the telecom industry was almost always the legacy national carrier (or former monopoly holder in countries without a national player), while the others’ positions could be tied to when they got their spectrum licenses. “This is an infrastructure business that is all about people buying and building out infrastructure that requires a huge capital outlay”.
One of the real issues was the “crazy” spectrum auctions that generated piles of money for governments but drained capital out of an industry that then had trouble building out that infrastructure. (Comment – interesting. The group I was with at Bell Labs in the late 1990s had some people making long term projections and studying those auctions, and expressing exactly this concern about the future.)
“With each technology migration, the market becomes more entrenched and entry thresholds increase.” The view here was that because of the huge infrastructure investment in towers, rights of way, etc., we are unlikely to see any new entrants for 4G and beyond. (Comment – an interesting view – this suggests mobile service technologies have been “sustaining” rather than disruptive. I really wonder whether a combination of a new technology like WiMax and mesh networking (i.e. end user devices relaying messages) might create a disruptive entry option in which someone can enter the business with a small investment and a very different technology.)
Conclusion: ARPU can be stabilized but costs will not be, and costs follow the same growth pattern as traffic. (Comment – this presumes you get no benefit from technology which crams more bits over the same infrastructure. I don’t know about that.) The “bit pipe” model has been much maligned, but maybe it is sustainable – just charge customers for what they are using and be of value to them.
“The internet is Broken” (2005 article by David D. Clark of MIT). (Comment – oh yes, Dave Clark was my Ph.D. advisor and I had an interesting dialog with him over the paper. I claim it’s not the internet that’s broken, it’s the bad software monopolies that surround it (i.e. people making bad choices about critical software platforms that were never designed for the environment they now operate in). Dave wasn’t quite willing to concede that, but did see my point. I believe the real problem with that much quoted statement is that it’s way too broad, and people read too much into it to support whatever it is about the internet that their own work or company is trying to “fix”.)
Two big challenges for the future: a variety of new applications and endpoints, and cloud computing (i.e. putting everything on the network means you have nothing if the internet doesn’t work properly for it). The internet architecture is a patchwork, with lots of things put in over the years that break the design principles. One of the proposals here is to start over from scratch: NeTS, FIND, GENI – “Clean Slate” internet research. (Comment – the current issue of one of the computing technical magazines has a debate on whether this is the right thing to do – it’s timely.)
Some say the future is here: GENI. (Comment – I wonder. “Internet 2” was a similar initiative, but its results were mainly just absorbed into the internet. I believe opportunities to start over clean are few and far between. The value you get has to FAR exceed the present to make a disruptive jump attractive. Fixed phones couldn’t incorporate some of the things now common on mobile phones because doing so would make them cost too much, and users wouldn’t accept the cost increase for some marginal features; but mobility is so valuable users are willing to pay more and accept a service that isn’t exactly compatible with their expectations from the fixed world.)
He had lots of description of work in EU,
He began with a story about being an academic, getting tenure, and recognizing that it made little difference to him, so when he got an offer to become an editor for Nature he made a move to the commercial world. One of his first duties was to write a piece for the Times of London on the Nobel Prize in Physics, which was awarded to a friend of his. He gladly accepted, thinking he had 3 weeks to do it, and was shocked to discover he had to do it in 4 hours. Are we as an industry facing the same kind of shock, because the industry we are joining is moving much more quickly? His observation was that the 3 visions given by the speakers were all of the “academic” view – take your time and get it right – rather than reflecting the need to react with speed driven by competition.
Bengt – We can’t get around the need for standards, and standards are by definition serious. He felt standards will always be important; what we need to realize is that it’s not just the operators that will form them. Stuart responded that we are working on standards for rich communication services, have been for years, and after some time we will be able to replicate what was available from Skype 2 years ago (i.e. can we afford to move at the rate we are moving?). Bengt – probably not. Apple is launching a video service, and is likely to rapidly exceed the subscriber numbers of 3G video telephony and claim they invented it.
Stuart asked others about the statement: “Is what we are doing in standards too long and too complicated to avoid getting bypassed by the more rapidly moving players?” Thomas – is Skype successful because they just did it, or is it really that they found a solution to allow it to be used behind firewalls and NATs while others just relied on SIP which required constant intervention by users to open up the right ports. (Comment – this really is support for the theme that to succeed, something has to be simple – it can’t require that the user learn to do anything complicated or that they do anything tedious, including paying too much for it)
Philipp asked Stuart if he made that 4 hour deadline – the answer was yes, but only because he had actually worked with the Nobel Prize winner for years and already knew all about him and his work. He also said that’s not the point – the world that we all came from in the communication industry was based on slow processes that “get it right”, but we are now driven by competitors with a very different way of operating.
Philipp – what do I do if I want to have a video conference with someone who doesn’t have a compatible device or application? (Comment – it’s a problem, but Skype doesn’t care. One of the key differences is that telecom operators have a small set of customers and a universal service mindset that what they deploy has to work for all of them. Skype has access to a huge universe of users and doesn’t need to worry about the fact that some of them have old equipment or slow or choked connections and can’t use the application, because the universe of people who can use it is more than large enough.)
Stuart’s answer was it’s all about regulation – regulators won’t let carriers pass 40% market share, which enforces this division of the market and causes operators to need these slow collaborative processes to do anything. Heinrich – it’s obvious why operators don’t do Skype – it doesn’t return anything (well, not immediately). One of his points was that operators like to do big things with universal appeal, not lots of little applications. (Comment – yes, there is no killer app that you can identify. Another interesting piece of history: the old AT&T would license technology to others, but the process was so cumbersome it could not do it cost effectively for less than on the order of a thousand dollars, because each sale required significant effort to work out the contract. That would price it out of the market for most innovative software technologies today.)
Stuart made an interesting point – 10 years ago location was viewed as a key operator revenue producer, but in fact the operator has been completely bypassed as users have purchased GPS enabled devices that get their location without help from the operator.
Question (Chet McQuaide) – he was struck by the comment about the ARPU from voice being constant. He felt this wasn’t always the case, but close. His piece of history was that when they introduced a new service, 800/Freephone, it created a huge revenue source for operators but didn’t raise any individual’s bills. Are there other opportunities like this?
Philipp – their panel of teen users were supportive of 3rd parties sponsoring premium services. Bengt gave a lot of discussion on advertising and sponsorship as another potential new revenue stream. One problem is that the overall operator revenues dwarf these kinds of returns. Heinrich – operators have the opportunity to expand through “packaging”; if they can put together attractive packages bundling devices, bits, and services they can compete effectively. That’s not what’s happening though; instead they are trying for exclusive deals (i.e. be the exclusive distributor of the iPhone), which unfortunately fragments the market and encourages bypass.
Philipp – isn’t Google search a sponsored service? Isn’t that a profitable service? (Someone else noted the Android platform is the same way.)
Comment (DT person) – There are two problems with the “Google doesn’t steal from telcos” argument. The first is that telcos individually aren’t as large as reported. The second is that Google requires resources from Telcos to deliver that service but they aren’t paying for those resources.
The second half of the keynote session presented speakers from the host country (Germany).
His theme was that they want to become more like a software
company. He went through the typical
software company development process, which are focused around APIs, and went
through a set of stages to identify needs and deliver applications. Interestingly enough he used location as an
example, talking about the ability to use maps from “the cloud” using mobile
data and location from the mobile phone (Comment
– interesting since others said the carriers essentially lost this market. I think in the US stand alone GPS devices
with the maps in the device dominate, but the amount of international travel in
Europe vs US may make a big difference in the ability
to hold enough information in the device and keep it up to date.)
He talked about their “Developer garden” initiative.
He talked about the appearance of innovative voice services (different kinds of routing, call handling, etc.), the message being that innovation in voice isn’t over. Some things he highlighted were language translation, text to speech, and unified messaging. (Comment – I still don’t get unified messaging. I get 100+ emails a day and there’s no way I want to see them on a mobile phone or have to wade through an audio interface to deal with them, but I may not be typical.)
Digital money was another area for innovation – the phone as your wallet. (Comment – about this time my laptop put up a blue screen. I had a momentary panic. I can’t imagine what would happen if my finances were entirely dependent on an electronic device which failed on a trip in a country where I don’t even speak much of the local language. Are people really ready for this? It will clearly tax the ability to provide reliability and security). The telco is a natural player in digital money because of their existing billing relationship with the customer (Comment -- Yes, but there are other competitors for this position)
He talked about the trends to digitize automobiles and the home and potential roles for the carrier here. The carrier is a natural player for entertainment and content in this.
Final thought: “Don’t ignore the internet”. He talked about how the mobile industry tried many times to ignore it by developing walled garden approaches that limited user access – they lost every time.
His keynote was on identity management and Web2.0. He talked about how communication technologies go from being cutting edge for early adopters to being completely absorbed in our lives. He gave some examples, with mobile phones being at the “absorbed” stage, and iPads being at the “early adopters” stage.
A typical user has multiple aspects of identity for different purposes. He went through a lot of identities a user might have for business, for communication, for family, etc. The number one problem the user has with this is too many user IDs and passwords. (Comment – right on) Secondarily there was distrust – to use something the user discloses information and that carries risk. The need to disclose personal information to multiple sources and especially to services the user does not yet trust is a major disincentive to use. He had an interesting update to the old cartoon showing that “on the internet nobody knows you are a dog”. Now people you deal with on the internet are sharp enough to figure out you are a dog, what your breed is, and sell you specialized dog food. Big brother is watching.
80% of users think privacy is very important and 76% are concerned about mis-use of their private data.
Who do customers trust? Curiously enough, banks still score the highest (even with recent bank and financial scandals), but the mobile ISP scores very high along with Google and on-line communities. His view was customers trust telcos because they pay them. (Interesting; I suppose the feeling is that users won’t do business with someone they don’t trust, but they might participate in anonymous transactions with someone untrustworthy. There was a question in another session about this view, the feeling being that trust of the mobile operator may be regional – in the
He talked about what it took for operators to build a circle of trust including brokering identity, tracking usage to predict what the user wants to do and help them, and then personalizing the users experience around what they want to do and how they want to use the devices.
Can telcos do this? It requires speed, and they have regulation to deal with as well as a variety of capabilities to master. Standards take way too long for this world. The bottom line is that it’s a competitive world and users will decide what they want.
The digital information era is coming (or already here). Information is now critical to many things where that isn’t obvious, like farming or manufacturing. The network can bring information, but can expand to bring knowledge and even wisdom (i.e. getting expert help from the network). Cloud computing is creating a new opportunity for the network. One view is that the migration of services to the cloud and to the terminal leaves the network out of it, but in reality the network has the opportunity to add significant value as an intermediary by supporting the ability to deliver (distribution, finding customers, brokering access, etc.).
The digital era is creating a lot of surprises. One he showed was prediction vs actual in usage of the Apple store – reality was 13X prediction.
Stuart Sharrock made some comments – the presentations were mainly focused on Tier 1 carrier needs. What’s the reaction from second tier players?
Thomas – his view was second tier players could not support everything, and would outsource things that had economy of scale, mainly their basic network operations, to the Tier 1 players who had economy of scale there, and focus on value added services. (Comment – interesting. In the
There were some more questions about what a software company and what an aggregator really look like, without definitive answers.
This year I was given half an hour to give the highlights and advertise the poster/demo session. I think this was a useful thing to do and I was glad to have the opportunity. We had an excellent set of presentations this year and, with a reception Tuesday evening in the poster/demo area, much more time for people to interact with the presenters.
This was the first session of the morning on Tuesday. After an excellent networking reception Monday evening some excitement was needed, and the session delivered it.
Kris Kimbler – Kris was the founder of Appium, one of the successful application development companies spanning the IN and IN+Parlay era, and an outspoken speaker at ICIN events for years.
Today, there are about 80 app stores, 300K applications, 20K new applications per month, 60K developers, 6 Billion downloads and $2Billion in revenue (Estimated by the speaker).
Apple is really dominant (225K applications); Android is at 30K and growing (it’s the fastest growing by percentage, not in absolute numbers). Apple’s success is a combination of a good platform and an eco-system. One big factor is that the app store client comes pre-loaded in the devices, something others didn’t do. (Comment – I think the existence of iTunes and the huge number of people familiar with it and with accounts to start with had a lot to do with it.) Apple operates on a revenue sharing model: 70% for the developer, 30% for the app store. This is a radical departure from what happened in previous technologies, where the application developer got some fixed fee and the operator got any upside if the application proved popular. The revenue sharing model really jump started the app store process.
Some interesting things: the App Store lagged the iPhone by over a year – Steve Jobs was against it and lost, but the debate within Apple delayed its introduction. (He has a reference in the paper that covers the history and Steve Jobs’ role in particular.) Apple has the lion’s share of developers (45K vs 10+K for Android). There is currently very little overlap among developer communities for platforms, and those who do multiple platforms probably represent the big gaming companies and others with lots of money.
While iPhone has a lot of hype, all smart phones together will only be 28% of the mobile handset market in 2013. Apple is only a small fraction of that. (Android is projected as larger than Apple by then).
The reality of application stores: 15% of smartphone owners download regularly, and games outnumber communications applications. (Other speakers had statistics showing that a huge number of those applications are little variations on simple things and that the biggest categories of applications are “flashlight” and “address book”.)
He had lots of data on the costs of applications: with Android, 60% are free. Most are less than $2. The average price is $4. Figuring the Apple numbers, the total amount available to developers is about $1B, but most goes to the top few companies. Most will not get enough from the application stores to cover costs, but they develop anyway.
The business case for Apple is that they get half a billion from the downloads, but their hardware sales are $25Billion. (i.e. applications are important, but mainly they are important because they sell the hardware, which is 98% of the revenue.)
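Checking the proportions quoted above:

    # Apple's app-store revenue vs hardware revenue, per the speaker's figures:
    apps, hardware = 0.5e9, 25e9
    print(hardware / (apps + hardware))   # ~0.98, i.e. hardware is ~98% of revenue
    # And the 70/30 split applied to a typical $0.99 app:
    print(0.99 * 0.70, 0.99 * 0.30)       # developer gets ~$0.69, store keeps ~$0.30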
Application developers are now hard to attract, everyone has a store, and there are too many choices. Money isn’t enough to attract developers, you have to offer more: Maybe the store can help with advertising. Maybe the store offers some kind of celebrity affinity “coolness” for developers.
The most probable model is that most operators won’t have an app store, they will let customers download from the big stores. An intermediate option for operators is to aggregate (i.e. one store serves multiple operators), where the operator may get only 10% of the revenue (because the store operator gets a large share).
The average iPhone user spends $25/year on apps. But ARPU in
The conclusion here is app stores won’t generate much revenue for the carrier. They are really there to help sell phones. That’s true whether you are a carrier or a handset vendor. There’s no business case for a carrier based application store other than “Pride”.
Rebecca is another familiar speaker in ICIN, now an independent consultant.
She gave a lot of the same figures as Kris, but had a very different focus on it – look at all that money going to developers and Apple.
She described her own experience – user overload – she wanted a currency conversion application, went shopping and found hundreds – what should I pick? On the positive side she said a big factor here is that the app store has a micro-payment model – it’s good that the apps are the price of a cup of coffee, because that’s the level that people will actually pay.
She described a few applications – many combined the camera with communication and hid the use of MMS to upload information from the user, making things very easy: Buy houses, contact your garage, report graffiti, etc. Another one automatically reported bumps in the road to the transportation department. Lots of these were linked to other companies or government agencies (i.e. the user may get the app for free because using it benefits someone else!)
Lots of these are regional in focus. The app is only relevant for a small group of users. (That’s part of the reason for the fragmentation of applications.)
Lots of apps essentially replace advertising, and one big benefit is “click to connect”, which avoids the user getting lost trying to find the right 800/freephone number to call to purchase.
One thing she said about stores is that there is a difference in who puts applications on different stores – 4% of apps on AT&T are games vs 56% on Apple.
Many applications are small, simple, one-shot use – (Interesting – adapted to what I term “the short attention span generation”) Applications are disposable. This is an alien model to a telecom world which looks at long term subscriptions, contracts, and 40 year lifetimes.
More interesting facts from the user:
What are they spending 79 minutes on? Monitoring – watch what your pets (or children) are doing at home. (Comment – I’ve read about things that generate Twitter IDs for pets and periodically post tweets from them. I wonder how much of this is just a pet rock style fad, but I suppose that’s not important)
Lots of developers are going towards Android – the message is developers are fickle – they go to where the biggest market is in their view. Only 1% of applications do “useful” things.
She had some interesting statistics on application stores from different vendors – there’s almost a price war going on from the manufacturers and the app stores are offering many applications free just to entice buyers.
Application stores represent a threat to the carrier because the relationship with the application store means the store knows more about the end user than the carrier. There is a lot of concern about “Over the top” applications cannibalizing carrier revenue, but it may not be a big threat because in reality those calls will cost more (i.e. Skype over mobile data costs more than a phone call especially when roaming).
Her bottom line was that application stores are synergistic with carriers – friends they have to have.
Question: (SAP person)
– who in the audience has developed a mobile phone application? (maybe 5 hands
out of 50 people.) He went on to talk about lots of problems in doing this – lots of overlap with other applications, issues learning and using the language and tools, etc. Rebecca made the point that building apps
isn’t new, what is new is the open structure which allows many overlapping
applications to get to the market and the market to decide who is best. (Comment
– I wonder how users make those choices, the number of choices must be
overwhelming.)
This presentation covered the Servery CELTIC project, which creates a common market for northern
Servery seems to be aimed at
supporting the whole cycle, from development to sales, with graphical tools for
the developers to use network assets, browsing tools for customers to help them
sort out which application applies, and sales and revenue sharing support. (Comment
– there was such a contrast in style between this presentation and the first
two presenters that it was difficult to
follow, but this work definitely seems to be in a very different mold of the
operator controlling the whole process, not the typical internet model.)
The rest of this presentation went through the details of how development is done and applications are delivered through a combination of IMS and Web2.0. This was in my view a very conventional discussion of carriers opening up their networks and exposing assets through APIs.
The issue isn’t whether or not to have an app store; the app store is just a mechanism, the way the user finds the application. She was much more interested in the business model behind the creation of the application and the sale of it. App stores are the tip of the iceberg.
She gave a lot of figures on usage: over $1 trillion in mobile content in 2015, $13 billion in location services in 2013, big growth of social media, etc.
That’s not the problem though; she showed growth curves that show 3% growth in voice revenues (if you are lucky enough to be in a growing country). Data revenues grow at about 13%, BUT usage is growing at 131%. Therefore there’s a disconnect. (Comment – this is interesting, because for years there was a view that we couldn’t sell enough mobile data to tax the infrastructure that was already there, but in the past 2 years we have moved way beyond that, to operators getting a black eye (e.g. AT&T in NYC) over being unable to keep up with the tidal wave of mobile data demand.)
The internet defines new business models and “disintermediates” the relationship between the customer and the producer. (Comment – I recall a decade ago getting that word in a presentation I had to give for someone and not knowing at all what they were talking about. It now seems a natural concept that the internet eliminates the middleman, and in many cases our industry was the middleman)
Network providers have become very aware of application stores and are very much looking at the application store model as a way of getting new revenue.
What do developers look for? Mostly the number of potential subscribers, then finances, fame, and lots of other things (the paper has the full numbers). (Comment – that’s the real challenge for the carrier; independent stores may reach more potential customers.)
Fragmentation is a big problem – too many devices, too many incompatibilities with carriers, too many different restrictions on usage of capabilities, etc. (Comment – but remember the view that most developers don’t deal with multiple devices at all!)
They concluded there wasn’t necessarily a right business model and looked at >120 different ones. It came down to 6 categories:
(She had a half dozen or so logos in each box indicating who played there). All can be profitable.
She went through some characteristics in a matrix describing who was responsible for what, (again much more is in the paper.) One point was that the network provider led model, while popular, has the highest cost for the carrier because it puts a lot of work on them. It does give them differentiation, but at high cost. Aggregator can give many of the same benefits, but by sharing the work across more players the aggregator can reduce costs (Comment – might also be more attractive to developers who don’t have to work with a lot of small network operators just to reach their networks) Many companies really underestimate the cost of supporting a developer community.
She went through a few more characteristics of those two models as examples.
The bottom line was that what the right model is for any given carrier depends on their business goals – increase revenue, save costs, grow brand, etc.
Roberto Minerva (Telecom Italia) – not sure he shares the view – app stores from Apple and Android are overlays; they don’t care about connectivity or the network. They are totally focused on the terminal and will build whatever they need over the top. To do this they actually create a rather closed world – there’s an Apple or Android way to do anything, specific to the platform. (Comment – yes, I think that’s right; they don’t tend to use anything from the operator except at arm’s length.) Operators instead have to spend a lot of effort on infrastructure and interworking. They spend lots of time on creating APIs for interworking. Should operators emulate Apple?
Rebecca – multiple APIs to deal with. Even Apple has multiple versions. The network operator needs to be more focused on APIs for their capabilities and making them available to developers to use in new ways.
Unfortunately the first speaker in this session, who was from the Berlin Institute of Technology and was to talk about a research project, took ill and could not present. The two remaining presentations included one on efficiencies in mobile content delivery and another on the challenge of synchronization in IPTV for social services.
Wipro is a software developer in India.
He gave some statistics of mobile usage in India.
What are the challenges? One big one is the variety of devices and the rate of introduction as well as the growth of content – lots of overhead to support everything.
There are 6,000 handset types, but only 70 types account for 90% of the downloads; the others are mainly just overhead to the content providers. A typical content provider has to provide 80 different formats for things like “wallpaper”. (Comment – interesting. I wonder to what extent this is an artifact of second generation technology – i.e. will things become easier with smartphones, where the phone can do more of the work of adapting content to its capabilities?)
The focus of their work was solving this problem – reducing the number of unique combinations to be supported and introducing a way of producing a best fit mapping that made things easier for the content provider. The result decouples the content providers from the variety of devices.
Their framework uses a rules-based mapping which scans incoming content and produces a small number of variations in the content management system, then a second rule engine drives the customization of content to deliver to the phone via WAP. Their research was to determine the optimum number of variants to store and manage to allow the right information to be generated for the phone when needed.
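Their actual rule engines weren’t shown in code, but the best-fit idea is easy to sketch; the variant set and capability fields below are hypothetical, purely to illustrate the mapping step:

    # Pick the largest stored variant that still fits the requesting device.
    VARIANTS = [                     # (width, height, format) held in the content system
        (128, 160, "jpeg"), (240, 320, "jpeg"), (480, 800, "png"),
    ]

    def best_fit(dev_w, dev_h, dev_formats):
        fitting = [v for v in VARIANTS
                   if v[0] <= dev_w and v[1] <= dev_h and v[2] in dev_formats]
        if not fitting:              # nothing fits: fall back to the smallest variant
            return min(VARIANTS, key=lambda v: v[0] * v[1])
        return max(fitting, key=lambda v: v[0] * v[1])

    print(best_fit(240, 400, {"jpeg"}))   # -> (240, 320, 'jpeg')

The research question was then how many such variants to keep so that most devices get a good match without storing all 80 formats.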
There are some real problems – Copyright is an issue since the operator may not be entitled to manipulate the content, which prevents dynamically transforming content to fit a particular device type. Another problem is that the need for online transcoding might be bursty (i.e. if some popular content gets downloaded broadly the delivery system may need to reformat it for all the “unpopular” devices to serve the burst.) This was not the case in their experience.
Question (Me) – does the problem get worse or better as the market migrates to smarter phones. Answer – if the phone can do the transcoding and take more on itself that relieves the carrier from it, but that’s a long time off in a developing market like India, and in particular they are still expanding GPRS/WAP capabilities into second and third tier cities where affordability of mobile services is tight and this solution is an excellent fit to deliver content at the lowest costs there.
Question – how does the system know what device is being used? (It’s in one of the fields of a WAP request.)
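For reference: in WAP/HTTP the handset identifies itself in the User-Agent header, and UAProf-capable devices add an x-wap-profile header pointing at their capability document. A tiny illustration with made-up values:

    def identify_device(headers):
        ua = headers.get("User-Agent", "")          # handset model/firmware string
        profile = headers.get("x-wap-profile", "")  # URL of the device's UAProf document
        return ua, profile

    print(identify_device({"User-Agent": "ExampleVendor-K100/1.0",
                           "x-wap-profile": "http://example.com/uaprof/K100.xml"}))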
Or “A Network-Based Synchronization Approach Standardized by ETSI TISPAN”
For now, TV is an individual experience, but many find it unsatisfactory. Social TV is a way of sharing the viewing experience with friends in other locations. (Comment – I’ve seen presentations on this concept for years, even done work on this myself, but it’s not here yet.) He made the same point – it’s not here yet, though the capability exists to use Skype for communication while watching TV or share watching a YouTube video.
Social TV introduces the need for group synchronization – the problem with group watching is that different viewers may see the same thing at different times due to differential delays in the network. Interestingly enough he showed a graph of delay versus delivery technique for real time video. IPTV or satellite generally deliver content before broadcast TV, while every other medium is behind broadcast. (Comment – Yes, this is a real problem, but the largest differential delays shown in his chart are on the order of 2-3 seconds, much shorter than it takes the user to communicate with friends. On the other hand I wonder about the situation where at least one participant doesn’t have the bandwidth for continuous video and continuously falls behind the others. That’s where I think there is a real problem but I’m not sure how to solve it.)
He talked about various approaches to group synchronization:
All 3 are specified by ETSI-TISPAN IPTV specs.
He showed a reference diagram with protocols in SIP to do all the needed work. He gave ETSI standard numbers, reference points and information on how all this can be done.
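He didn’t show the algorithmic core, but the network-based idea reduces to something simple: each receiver reports how far behind the reference its playout is (via RTCP extensions in the standardized approach), and a synchronization function tells the early receivers how much extra buffering to add. A toy sketch with hypothetical receivers, delays in milliseconds:

    def sync_offsets(reports):
        """reports: {receiver: playout_delay_ms}; returns extra buffering per receiver."""
        slowest = max(reports.values())        # align everyone to the most delayed
        return {rx: slowest - d for rx, d in reports.items()}

    print(sync_offsets({"iptv_a": 1200, "satellite_b": 400, "dsl_c": 2900}))
    # -> {'iptv_a': 1700, 'satellite_b': 2500, 'dsl_c': 0}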
Question: Kris – is this a real problem? Consider all the effort spent on Voice Call Continuity to hand over calls from WiFi to mobile – the real need for this turned out to be quite rare in comparison to all the effort spent. (Comment – yes; when I was with Personeta, they claimed, correctly as it turned out, that the need for this was quite small in comparison to the need for having the same service on multiple devices and the ability for customers to hand calls between devices explicitly.) Answer – he’s not sure how big the real need is. Nobody knows whether anyone has implemented the standard.
Question – does this solution work when end users use different codecs? Answer – yes; this is probably the situation where it is needed, because one user may be much slower because of transcoding needs.
Question (Deutsche Telekom person) – there is a process to synchronize the decoding/presentation to a global clock; what does this do in addition? Answer – he didn’t know what process the question referred to. (Comment – I think from the paper they considered the ability to synchronize without a global clock a requirement.)
Question (Chet McQuaide) for first speaker – how do you define “best fit?” (In an aside he mentioned that some content providers (e.g. Disney) have standards for what’s the minimum acceptable way to deliver their content and “best fit” might not meet that).
The answer was that they defined it ad-hoc based on user feedback. It turns out users aren’t fussy – the main issue is can the user get the content downloaded and is the image good enough in the eyes of the user.
Question (Michael Crowther) – is the real problem for synchronization with users connected over different technologies? (Answer – yes, this is a bigger problem.)
Question (me) – what if one of the users can’t keep up in real time? What do you do? Answer – we have to support this, but it’s not clear how.
Question (NSN person) – did you consider how to structure content storage (i.e. where to put it)? Answer – to this point the focus has been on the SDP, which is up in the service layer and centralized. There are efforts to address the distribution of content delivery functions in an optimal way, not covered by this project.
Question (Anders Lundquist) – we seem to be going away from real time, except for sports, and towards video on demand. How does that trend affect this service? Answer – this kind of thing is happening ad hoc today without benefit of standards. Anders was looking for an answer to what happens when 2 people decide to watch something on demand and are out of sync. The answer was that the synchronization they are looking at is on the order of 10 seconds; beyond that it’s more like what happens when someone comes in in the middle of a movie playback and the two decide together whether to restart the movie to watch together or just join in progress.
Another comment – this is a good first step, there are clearly lots of additional needs for synchronization.
Another comment from a FOKUS person – the synchronization issues are much tougher in Gaming. Would this solution apply here, or is there something from Gaming that should be applied to TV? Answer – Gaming is a very different situation, it won’t tolerate gaps and is very real time oriented. Good gamers adapt their behavior to delays in the network (e.g. firing ahead of getting in range) You don’t want the system to introduce variable delay that will frustrate that.
As the sponsor of lunch on Monday, Huawei made a presentation focused on their research and development centers in
Huawei has 40,000 employees in “R&D”, with 10% of revenue funding R&D. They have 22 R&D centers around the world, 4 in the
KDDI is a major communication company in Japan.
He talked a bit about the need for the phone to talk to the content server in the home, pick the right things to play, and then control playback. He then went into an architecture to address it and several functions that were needed:
They did an experiment involving the need to start by establishing start times for playing and then changing devices along the way. All this seemed to be done with HTTP as the control. It took about 1 second to set up the play time and 2.4 seconds to change devices. (Comment – because of some language troubles and a lot of detail in the slides I’m not positive, but this seems very slow to me.)
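The exact protocol wasn’t clear from the slides, but the shape of the experiment (agree on a play position, then move playback to another device, all over HTTP) can be sketched like this; every URL and field name is hypothetical:

    import json, urllib.request

    def control(url, payload):
        """POST a JSON control message to the (hypothetical) home media server."""
        req = urllib.request.Request(url, data=json.dumps(payload).encode(),
                                     headers={"Content-Type": "application/json"})
        return urllib.request.urlopen(req).read()

    # 1. Establish a play position for the session (~1 s in their measurements):
    control("http://home-server.example/session/42/play", {"position_s": 130})
    # 2. Transfer playback to another device (~2.4 s in their measurements):
    control("http://home-server.example/session/42/transfer", {"to": "living_room_tv"})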
His conclusion was that the times were acceptable and you could set this up. (Comment – this seemed like a lot of effort for a rather odd service. Maybe this is more natural in Japan.)
Home networks supply a lot of social needs for people (games, home automation, security, and communication). A key aspect of home networking is connection to the global NGN/Internet. It’s a door to the home.
ETSI TISPAN considers a home network to be like a Customer Premises Network and a part of the NGN from a standards perspective. It has a gateway (CNG) and multiple devices (CNDs) behind it.
Why is security important and who cares? It’s important to the customer because it protects their privacy, prevents fraud and especially protects children from unwholesome influences. For the provider it protects the NGN from abuse and fraud, and protects their ability to meet regulatory requirements.
Home Networking is complex – many devices, many protocols, many services, regulation, network address translation, etc. They used a Threat/Vulnerability Analysis to look at the requirements and potential solution.
The main threats include:
The countermeasures to these threats were enumerated by TISPAN, with the focus on putting the burden of security on the Customer Network Gateway. This isn’t perfect because it creates problems for some services like remote administration.
The conclusion here was that there are many issues and the
network operator can help customers protect their home network. (Comment
– I think the real problem is this stuff is very complex for the customer who
is looking for simplicity. I don’t
honestly know how most non-technical people manage to set up home networks
given the amount of effort required.
Most ready made solutions assume you start from scratch and have a
single vendor, but by now most real users have a patchwork of solutions and
have to get them to work together.)
The context for this work includes point to point services that need to get at home devices behind the home gateway. Prior work includes web based solutions (a big security problem, and each service has a unique architecture). Solving the problem at the home gateway has been a problem.
Their proposed solution allows a service to share multimedia contents between terminals connected to different home networks. It’s transparent in a commercial terminal, and it’s secure and preserves QoS.
Their solution adds a remote access server in the IMS network that manages access rights between home networks (i.e. defines who can access which network). In the home gateway there is a remote agent (a SIP UA that is accessible to the outside and can respond to requests) and the Virtual Media Server, which is there to stand in for media services behind the gateway and deliver the requested information outside. The solution follows basic procedures to establish a kind of Virtual Private Network between the using location and the remote home being accessed, with the remote agent and VMS completing the connection between the locations and acting as intermediaries to relay information.
He went through a long scenario of browsing a media catalog in another home and playing of selected media. They prototyped this using OpenIMS as the core network, and building the two new pieces (RA and VMS) to demonstrate feasibility.
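A compressed sketch of the flow as I understood it (identifiers hypothetical; the real procedures are SIP/IMS signalling, not Python):

    # Remote home-media access: RAS checks rights, then RA/VMS relay the media.
    ACCESS_RIGHTS = {("alice_home", "bob_home")}       # held by the RAS in the IMS

    def request_remote_media(visitor_net, target_net, item):
        if (visitor_net, target_net) not in ACCESS_RIGHTS:
            return "403: remote access server denies the request"
        # The Remote Agent (a SIP UA on the target gateway) accepts the session;
        # the Virtual Media Server relays the media from behind the gateway.
        return f"VMS streams '{item}' from {target_net} to {visitor_net}"

    print(request_remote_media("alice_home", "bob_home", "holiday_photos"))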
Question – will the UPnP service work anywhere, or must they use your gateway? Answer – it is workable anywhere, but you have to have the components on the gateways you want to use.
Another audience member asked Mr. De Lutiis about another ETSI project on home security and how it related. The bottom line is that there is no perfect solution, what they present is a framework, but it is up to the network operator to select the best framework for their particular needs and network.
Another question was basically to the whole panel on how the telecommunication operator gets revenue for these activities (i.e. this is nice work, but will users pay for extra security, UPnP, network control, etc.?). Rebecca’s answer was that anything they do which increases usage has the potential to generate revenue. Mr. Mahdi’s response was that his service allows the operator to deliver a premium service which is more valuable to the end user. The architecture potentially applies to both home and commercial applications, including home automation and VPN “work at home” situations. The response from Paolo De Lutiis was that security is a key need, and while end users might not pay for it now, in the long run he expects they will, because they will make more and more use of the network, including some valuable financial things. (Comment – personally I am a Luddite when it comes to on-line finance, because I KNOW how insecure the underlying machinery is, and as long as I can avoid opening on-line access to financial accounts I will do so to avoid compromise, but that’s not the way everyone operates. Specifically, one of the interesting comments at lunch was that eBanking is very popular in Pakistan, even though most people don’t have their own mobile device for access; they have to borrow someone else’s. For now apparently social pressure prevents obvious abuse.)
Rebecca tried to get the KDDI speaker to address the question of whether users would pay for coordinated delivery of the sort he overviewed. I don’t think he knew. He said this would be part of the services provided by the operator. (Maybe what he means is that it is used to differentiate the operator’s offering.)
Hans Stokking made the comment
from the audience that all these things help differentiate the offerings of the
operator from competitors and help stimulate more usage. Someone suggested that these kinds of things
might be subject to “Viral Marketing” popularizing these services, and that all
of this might become a marketing plus. (Comment – true enough, but I think this
really is rather weak. Ultimately I
think you need to decide if people will really pay for something. If they won’t, then it doesn’t matter much
whether you provide it or they download it free off the internet.)
Rebecca commented that the coordinated media service would really be a natural candidate for use in marketing.
There was some discussion on other uses, like using coordination to improve remote playback of events for parents, or media sharing in conjunction with social networking.
Kris now represents his own social networking company, Azouk networks. The key question he posed for the session was: How should service providers relate to social networks? Are they a threat? Can they be turned to benefit the operator?
The case for rich media – used to be that people would watch TV or see movies independently and then converse about them over the fence or around the water cooler. Lots of technology now supports communication, but it hasn’t so far replaced the informal socialization.
He talked about a project (TA2 or “tattoo”) they were involved in that had a lot of devices and services to allow people to interact with each other that they have been using to investigate scenarios for services.
What are the requirements? At the low level:
Next up is service management and multimedia composition (managing who is in, who is out, and which media are associated with various flows), then on top, orchestration.
They have four rooms in
He presented a lot more detail about how the whole test setup worked. (Comment. I’ve seen lots of setups like this, from the MIT Media Lab to conferencing testbeds in
He showed a short video clip of a demo of their system. One interesting thing was that they used audio localization of the speaker to pick the view being transmitted. The narrator claimed this was the first such system to do so; however, I know the crude video conferencing system used within the old Bell System in the early 1980s had a similar function: it had a set of microphones at the conference table and used whichever one picked up the strongest signal to pick which camera view got transmitted. (Again, an interesting capability, but unclear what real value it provided. The one I experienced was error prone in that the microphones picked up every noise in the room (including ones you didn’t want anyone to notice) and often showed the wrong view.)
He covered a lot of work in progress including building a more lightweight implementation, reducing video delay, and going to multi-way communication, and figuring out how much bandwidth they really need, since up until now they have simply used 40 megabits.
Question (Dave Ludlam) – when will this become practical? Answer – no idea, the focus of the research is sociology, not technology.
This work is part of T-Systems’ Mobile Internet Competence Center, which has done work on business and technology consulting, mobile applications, service platforms, and operations and support. The focus of this work was addressing how retailers can get the word out on their offerings in an environment which is squeezing out budgets for traditional advertising.
Some of the opportunities here are to take advantage of the capabilities of smartphones (GPS, positioning, lots of processing, good display). They are not just trying to build another application. He gave an example of car companies offering a buying app – it’s a one-time deal; the user might download it while car shopping, but it’s not useful at other times.
One observation – marketing messages mean more when delivered by trusted friends or family, so the real goal is to figure out how to get users to share them with their friends.
Some things that work: Loyalty cards, mobile coupons, search and navigation, entertainment, and things that build community.
He saw a lot of focus on things that helped companies retain customers – the internet has made it way too easy for customers to share their experiences and as a result failing to satisfy one customer may result in loss of many sales.
Another opportunity is scanning bar codes with a mobile device to get more information on the product. (Comment – yes, these are becoming much more common, as are bar codes in publications and ads. It’s funny that over a decade ago a device to do this, the CueCat, was distributed to millions of people free, but nobody could be motivated to use it.)
He talked about using social networking to share ads and coupons with friends, which led to better delivery.
He presented some information on a demonstration he called the Bargain Hunter Application. This was a bit odd. The user could look for coupons, but before getting the coupon the user had to solve a small riddle to prove interest. It allowed users to share coupons via social networking. (Comment – I don’t understand the riddle part. I would think that any barrier to getting the coupon would be viewed as an obstacle and cause users to give up. Maybe most people aren’t quite as hassle-averse, but I personally have a long list of companies I will never do business with because I don’t like the way they advertise or because they are simply difficult to deal with. That’s a hard stance, but in those cases there are always way too many choices, and avoiding ones that cause me hassle is one way of coping with that problem.)
Question (Kris) – is there really a space for viral marketing from mobile operators? Answer – maybe not, Facebook does this already. Maybe the better way to do it is to use Facebook or Twitter as a channel.
Kris asked whether attendees knew whether their own companies had a presence on Twitter – few knew. Kris’s claim was that companies were not using social media effectively to communicate. It was being used with consumers, but not others.
The Web is “going social”, which contrasts to the walled garden approach used by mobile operators in the past. Platforms are open and there is federation among social networkers. (Comment – really? I wasn’t aware that Twitter and Facebook were anxious to share. Each has APIs and you can build to them but each maintains control, as do all the others.)
What is context – attributes of an individual that define situation:
Context is supplied by applications and sensors. Smart Objects use contextual information to decide how to customize services (example – a digital picture frame that decides what to show based on who is near).
Their context aware platform provides a point for integrating context information from multiple sources and brokering context between devices and applications. The platform has a context cache to remember short term information that might be useful, and a history database that maintains old context data for use in trying to learn user behavior and predict aspects of the user’s context from limited sensor information (e.g. when the user leaves the office in the afternoon he or she will probably next be “at home”).
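(Comment – a minimal sketch of the cache-plus-history idea as I took it down; the class and method names are mine, not the platform’s API. The prediction is just “what usually follows the current context”.)

    # Sketch: recent context is served from a cache; older context is mined for
    # simple transition statistics used to predict the next context.
    from collections import Counter, defaultdict

    class ContextBroker:
        def __init__(self):
            self.cache = {}                   # short term: user -> latest context
            self.history = defaultdict(list)  # long term: user -> [contexts]

        def publish(self, user, context):
            self.cache[user] = context
            self.history[user].append(context)

        def predict_next(self, user):
            """Predict what usually follows the user's current context."""
            current, past = self.cache.get(user), self.history[user]
            followers = Counter(past[i + 1] for i in range(len(past) - 1)
                                if past[i] == current)
            return followers.most_common(1)[0][0] if followers else None

    broker = ContextBroker()
    for ctx in ["office", "home", "office", "home", "office"]:
        broker.publish("alice", ctx)
    print(broker.predict_next("alice"))  # "home": after the office, probably home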
He went through the interfaces to the broker and its basic functions. A big issue is how to represent context. He proposed “ContextML”, an XML-based language to represent elements of context. Context data is segmented into scopes, each associated with who provides it. They didn’t try to do any comprehensive ontology of context information.
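(Comment – I didn’t capture the actual ContextML schema, so the fragment below is purely illustrative of the “scopes tagged with their provider” idea, generated here with Python’s standard XML library.)

    # Illustrative only: an invented ContextML-like fragment showing context
    # split into scopes, each tagged with the entity that supplied it.
    import xml.etree.ElementTree as ET

    ctx = ET.Element("contextML")
    pos = ET.SubElement(ctx, "scope", name="position", provider="gps-sensor")
    ET.SubElement(pos, "param", name="city").text = "Berlin"
    act = ET.SubElement(ctx, "scope", name="activity", provider="calendar-app")
    ET.SubElement(act, "param", name="status").text = "in-meeting"

    print(ET.tostring(ctx, encoding="unicode"))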
Claudio went quickly through some of the W3C and other standardization efforts related to this area.
Question (me) – could you use this technology to hold ICIN virtually? Kris felt no; Mathais echoed that – technology won’t eliminate the need to meet in person. Tim Stevens pointed to the poor quality of audio conferencing and how it improves after you meet face to face. Claudio felt that you won’t replace the social interactions. (Comment – in fact I wrote a paper 10 years ago proposing a way to hold a virtual conference by using technology to assist users in having the kinds of interactions they get from attending a conference in person (see http://home.comcast.net/~wamontgomery/communications/conference.doc). It’s not perfect, but it would be far better than the one-to-many structure so commonly used for “Webinars” and teleconference meetings, and with today’s technology you could do better. I do think this is a service that would be used, but I suspect there’s little motivation for it because users like travelling to attend a conference in person and conference organizers know they won’t get a significant fee from someone attending an internet-mediated conference.)
Dave Ludlam came back to the keynote that was given by video, observing that with a bit more technical help the connection could have been two-way, allowing the head of standards to participate in the discussion of why next generation internet efforts in different countries are not being coordinated.
Another comment (not sure who) – the average age in the room is at least 45; none of us grew up with this technology. His children have international friends that they consider close but have never met face to face. The point here is that the use of technology to communicate socially may be a better match to the next generation.
Tim Stevens talked about their experience testing children in their interactive game environment. Children get the communication much more naturally.
This session was devoted to cloud computing in various forms – offering infrastructure or services virtually over a network.
This work I believe is part of a multi-country European research project.
He started by talking about the importance of performance in determining how acceptable services are to the user, with some examples from Google and Amazon. He cited a study showing that a slight deviation in the performance of Google search would drive customers to alternative providers. He had a histogram of Amazon response performance showing mostly acceptable performance with a few delay spikes caused by failures of machines. Looking from Europe and using servers in the
He showed a set of curves of performance and cost as resources were added in a network environment, demonstrating a “bathtub”-shaped optimum for cost, where the service performs well enough to keep customers while keeping cost under control.
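(Comment – the shape is easy to reproduce with a toy model, assuming my reading of the curves was right: total cost is the cost of the resources plus the revenue lost to customers driven away by poor performance. All numbers below are invented.)

    # Toy "bathtub" model: too few servers loses customers, too many wastes money.
    def total_cost(servers, server_cost=10.0, demand=100.0, churn_penalty=50.0):
        load = demand / servers                 # load per server
        lost_customers = max(0.0, load - 5.0)   # overload drives customers away
        return servers * server_cost + lost_customers * churn_penalty

    costs = {n: round(total_cost(n), 1) for n in range(5, 41, 5)}
    print(costs)                                # falls, bottoms out, rises again
    print("optimum near", min(costs, key=costs.get), "servers")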
He went through various models for cloud computing. There are a lot of APIs being published and a lot of models for how the application requesting service specifies its needs and asks to scale performance. He showed an exponential growth in APIs with several possible futures:
He went through a scheme to describe the needs: annotate the description of the application’s needs with Quality of Service, then map that into an assignment to particular resources in the network which meet the requirements. The network mapping included the possibility of using private facilities in addition to the public internet.
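(Comment – as a sketch of what that mapping step might look like, under my own assumptions about the annotations: the application states QoS needs, and the cheapest resource pool, public or private, that satisfies them is chosen. All names and numbers are invented.)

    # Sketch: map QoS-annotated application needs onto candidate resource pools.
    app_needs = {"max_latency_ms": 50, "min_bandwidth_mbps": 100}

    resources = [
        {"name": "public-cloud", "latency_ms": 80, "bandwidth_mbps": 500, "cost": 1.0},
        {"name": "private-dc",   "latency_ms": 20, "bandwidth_mbps": 200, "cost": 3.0},
        {"name": "edge-pop",     "latency_ms": 30, "bandwidth_mbps": 150, "cost": 2.0},
    ]

    def assign(needs, pools):
        ok = [r for r in pools
              if r["latency_ms"] <= needs["max_latency_ms"]
              and r["bandwidth_mbps"] >= needs["min_bandwidth_mbps"]]
        return min(ok, key=lambda r: r["cost"]) if ok else None

    print(assign(app_needs, resources)["name"])  # "edge-pop": cheapest that fits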
He went through the process of distributed resource management that controls the infrastructure to meet the quality of service needs.
(Comment – it might have been the short night last night, but like many of the other cloud computing presentations I have seen, I find this a bit too abstracted to get a firm grasp of how I might actually use this if I had a large application to deploy.)
He presented a reference architecture for platform as a service – the ability to deliver a virtual machine from the cloud as a host to an application (Comment – again, personally I find this very tough going trying to translate the abstract descriptions into what concrete pieces of an application they represent.)
He went through a lot of information on structuring the application, then gave as an example a system that registers children for school. This application has been rolled out by DT to a few municipalities. The real benefit to the cities (or other end users) is the ability to rent everything on an as-needed basis. For this to work, the application and processes must be similar enough to others that “renting” the infrastructure makes economic sense and doesn’t require unique capabilities that will not be used by others.
Designing an application this way can require substantial effort because of the need to thoroughly understand the application. He noted the potential to re-use a lot of this for similar processes (e.g. college registration vs. kindergarten registration).
(Comment, as the spouse of an academic administrator working at an institution that recently adopted a “standard” platform for student administration I have heard about many problems with this model. The trouble is that while the overall processes of managing student life are similar, there are enough differences in the specific policies of any individual institution (e.g. when can an instructor change a grade) that it is very difficult to design a generic solution flexible enough to be adaptable to all the specific variations, and failure to adapt creates major challenges for all involved.)
His conclusion was that offering the platform for applications like this is a major business opportunity for carriers who have the expertise in networks and in managing large systems.
Question: Why do you think this is an opportunity for carriers specifically? (His question, I think, was more “what makes you think that DT can understand the business of registering children for kindergarten?”) Answer – because reliability and performance are key needs that aren’t met by the public internet, and DT has the resources to provide them and the expertise to do it right. Follow-ups asked whether there was some other way of doing this such that there would be strong competition for it.
In the process of addressing the question there was a lot of discussion of targeting a particular size of business – large enough to have complex needs but not large enough to manage the whole job of implementing it themselves. (Comment – about this time alarm bells go off in my head. Roughly 30 years ago the old AT&T was trying to find a way in its unregulated business unit to enter the computing and data communication business and put together what was essentially a “cloud computing” offering – applications that could be rented by small businesses with big needs and without the ability to do it themselves. There were a lot of problems with the way the AT&T effort was managed, and in the end it failed. I was told in a post-mortem review that one of the root causes of the failure was that an infrastructure powerful enough to meet a wide variety of needs was expensive, with the result that the fees for the users were substantial, and this essentially squeezed out the market – any organization big enough to pay for the “cloud” solution was big enough to buy dedicated resources and manage them itself more cost-effectively. We may have learned enough in 30 years not to make all the same mistakes, but I am nervous that so many people discussing “X as a service” offerings have only a fuzzy notion of who would be a good customer for that service, rather than a more precise notion of the market that would give good boundaries on acceptable cost and performance and cause the market to actually buy it.)
This work has been done jointly with other companies.
Their architecture has 3 levels:
The components of the solution include an access point where the user accesses the service (one point for all, with the API exposed), and a storage provider that manages the actual storage. There are also managers for content, system nodes, and users; these last three functions live in the data centers.
How does Clostera compare to other solutions?
This is a prototype that now operates only on a friendly user basis. Some of the extensions proposed include the ability to traverse NATs, handling sharing, and being able to support access while offline. (Comment – the current prototype is clearly limited; I’m not sure how to reconcile this with some of the earlier statements about the scope.)
During the question period interoperability was a common question and while the speaker talked about being able to interwork with other network storage architectures, one thing that became clear was that as we design “cloud” based services we run the risk of winding up with differing solutions that aren’t easily mapped, and creating a fragmented market, much as what happened in the early days of IN (e.g. solutions for Lucent IN couldn’t immediately be ported to Telcordia, Nortel, Alcatel, or other vendor equipment without substantial effort.)
The presentation covered the cloud as an implementation as well as the specifics of an application supporting healthcare. In rural
The system used simple mobile technologies (SMS, GPRS) to collect data from patients, manage it in a cloud-based information system, and deliver it to doctors (via mobile phone), then take the result back from the doctor. It’s based on the Amazon cloud offering (Linux based).
He showed a promotional video on the end service, in which a woman recorded symptoms and was put in contact with a doctor. (Comment – it’s a noble goal, but I wonder whether the technology here is too lightweight to give the doctor enough information for a good diagnosis.)
Question: Is this really a cloud application or just a network application? Answer – not in these examples, but he can envision cases where analyzing the test results requires massive computing and as a result benefits from the availability of on-demand resources via the cloud.
There is a lot of controversy over targeted advertising (i.e. is it invasive of privacy, is it effective, etc.) Nevertheless it is expected to grow over the next few years.
From the perspective of the customer viewing advertising, there are concerns about SPAM, the lack of an ability to opt out of getting it, and privacy. From the network provider perspective it might be a waste of network resources. Targeted advertising has the potential to improve the situation because it increases effectiveness for the advertiser and acceptability for the customer, who sees advertising and promotions for things he or she actually wants.
They did a survey of customers on how they felt about some aspects of targeted mobile advertising and found that there was very high resistance to advertising over the mobile device, but with incentives, like coupons and a way of turning it off (opt out), it would be acceptable to about half the audience. They asked about channels people would accept ads on, and SMS and email were far more acceptable than IVR or voice messages.
One of the things they found was that users were much more receptive to getting ads on a “pull” basis – where they might get notified that some offer was available but it would only be delivered when they asked for it. Three trigger models:
(Comment – the user acceptance still seems critical and very fragile. I can imagine a lot of potential abuse, like giving you an ad for a competitor when you call a company and don’t get through.)
He showed an architecture which included deep packet inspection to identify what the user was doing (i.e. gathering information from searches), together with other events from the network, feeding a recommendation engine which would produce a recommendation for an ad and a delivery channel for the user. That could be fed back into the packet processing layer to allow ads to be inserted (piggybacked) on delivered information.
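(Comment – in outline the pipeline is simple; here is a minimal sketch under my own assumptions, with invented rules. Events extracted from the packet layer feed a recommendation step that picks an ad and a delivery channel.)

    # Sketch: network events (e.g. search terms seen by deep packet inspection)
    # feed a recommendation engine that picks an ad and a delivery channel.
    AD_RULES = {
        "ski":   ("ski-resort-promo", "sms"),
        "phone": ("handset-upgrade",  "email"),
    }

    def recommend(event):
        for keyword, (ad, channel) in AD_RULES.items():
            if keyword in event["search_terms"]:
                return {"ad": ad, "channel": channel, "user": event["user"]}
        return None  # nothing relevant: better no ad than SPAM

    print(recommend({"user": "u42", "search_terms": "cheap ski holidays"}))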
Targeted content-based advertising is a new field, not as mature as on-line advertising, but it has the potential for much better targeting because the provider knows much more about the customer than is the case for the typical on-line advertiser.
Question (Chet McQuaide) – Amazon does a lot of targeted advertising now, how does what you are talking about compare with what they do? Answer – the speaker sees new opportunities related to content services.
Question – what kinds of channels does this work with, and is the ad really integrated with the content? Answer – it could use multiple channels (TV,
Telestration is the animation done by commentators for sporting events to overlay video and highlight things, drawing attention to particular pieces of the image. (It is also done for weather, news, and other material.) Today it uses dedicated equipment for drawing and video mixing that must be physically close to the user, and as a result it does not support a distributed set of users drawing on the same video.
They feel there may be a big market for being able to provide telestration over the network to a distributed community. They took as their requirements:
Their implementation composes the diagram in a server, then mixes that into the video at the client. This allows them to ignore differences in screen format and resolution (or, more accurately, to push handling all of that into the client).
Applications: Telemedicine, remote collaboration, education, first aid response, gaming, social networking.
They have a lightweight implementation of drawing that can be done strictly in a browser (low overhead). The service does need two-way communication for telestration, one-way video delivery, and support for overlay layering in the end device. (He said that most PCs and set top boxes will support overlay layering sufficient for the service, but didn’t talk about mobile devices.) The service also needs the ability to synchronize video streams to different devices in order to allow the telestrated drawings to vary over time and refer to specific parts of the video consistently on all terminals.
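(Comment – the two key mechanisms, resolution independence and timed application of the drawings, are easy to sketch; the structures below are my own guess at one way to do it, not their protocol. Strokes travel in normalized [0,1] coordinates with a video timestamp, and each client scales and applies them against its own playback position.)

    # Sketch: server-composed strokes scaled and timed at each client.
    def scale_stroke(stroke, width, height):
        return [(x * width, y * height) for (x, y) in stroke["points"]]

    def due_strokes(strokes, playback_ts):
        """Strokes whose video timestamp this terminal's playback has reached."""
        return [s for s in strokes if s["video_ts"] <= playback_ts]

    strokes = [{"video_ts": 12.0, "points": [(0.25, 0.5), (0.75, 0.5)]}]
    for s in due_strokes(strokes, playback_ts=12.4):
        print(scale_stroke(s, width=1920, height=1080))  # same line on any screen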
He showed a couple of examples, one of a group of friends watching snowboarding in which one of them was illustrating what was going on. In another, half a dozen doctors were watching an operation from remote locations; any of them could annotate the screen to show things to all, including the surgeon.
He talked a little about extensions to the work that will be capable of tracking objects as they move in the video and attaching telestration to objects.
Question: Does this tie in to the synchronization presentation by Hans Stokking yesterday? (Yes, synchronization is needed.) How do you resolve the problem that any delay in the channel causes twice that delay before others see the result of your telestration? (I didn’t completely understand the answer. It seems like they have a way of identifying when to apply the telestration updates, which they use in the terminal, but not full synchronization of the underlying video.)
Another question related to object tracking, and whether it would be useful in targeting advertising. The answer is potentially yes.
The purpose of doing this is to help customers deal with the problem of too many choices. (Comment – during one of our lunches there was a discussion of the difference in meal formats here to
In an online world, users have gotten used to making their own choices by doing searches, looking at recommendations from others attached to lists of choices and making their own selections. On the TV there are many problems:
There are different ways of doing recommendations:
Her work was more on the user interface to present the recommendations than the underlying technology or the scheme to generate them. Their system did not actually present video – it was a disconnected prototype that focused on judging response to how items were presented, not what was presented or anything about the user actually watching the recommended video.
They have a trial of this in progress. They had 2 user interface designers come up with designs for different ways of delivering the recommendations and ways for the user to select and navigate. There was limited information available on how to use the system to simulate what was likely in the real world.
They had four designs:
· a full program guide,
· blocks of times with recommendations on top,
· a list of things down the right side, a widget overlay,
· an odd “wheel” design (read the paper; it’s impossible to describe. Apparently they put it there just to judge response to an unusual design).
This was a qualitative trial where they got a lot of responses, both individually and in focus groups. They had 17 people in 5 groups, including both BT people and non-company people. There was a variety of experience with program guides and recommendation systems.
They gave people two scenarios, one where you had just turned the TV on, and one where you had finished a program and wanted something else. They used a “Grounded Theory” process to analyze this, which produces a good concise summary: statements are recorded, coded into categories, and clustered. The summary presented showed the groups in a font size related to importance (number of comments).
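(Comment – the summarization step is mechanical enough to sketch: count the coded statements per category and scale the display size by the count. The data and the size formula below are invented.)

    # Sketch of the grounded-theory summary display: category font size
    # proportional to the number of coded comments in that category.
    from collections import Counter

    coded = ["simplicity", "trust", "simplicity", "social", "simplicity", "trust"]
    for category, n in Counter(coded).most_common():
        print(f"{category}: {n} comments -> font size {10 + 4 * n}pt")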
Conclusions:
She presented their proposed interface which gave the recommendations down the left, the most important on top, and the program guide filling up the rest of the display.
Some more things: recommendations to multi-person households are a challenge, social recommendations are needed, TV is relaxation, and “don’t make it too complicated”.
Question – How does this work with a mobile phone device (small screen)? They are interested in understanding this. Maybe look at it as a second screen to help localize who is selecting content for the big screen.
Question – was the interface interactive? No, just PowerPoint for now; that’s a next step.
Question (Chet McQuaide). Observation – his IPTV offers a “Favorites” category, but it’s based on channels, not subjects, and that’s wrong. He refined the question about using a personal device as a remote, which is what they are looking at.
Question (me) – Did the users think there was a cost to watch a selection and does that matter? Users insisted that if they get a recommendation it’s based on their views, not what the operator has to push, and the system has to make clear what the costs are, then the users can handle it.
Dan is a past chair of ICIN. He introduced this session as where the application weaving and transformational challenge is really addressed.
He began with a description of the needs for controlling and monitoring Telco services, specifically lots of dynamically created and synchronized tasks, long running transactions, and handling of asynchronous events. He showed a reference model that consisted of a composite service having multiple component services (which communicate asynchronously), each of which deals with the devices and users. He then went through various patterns that apply. (Comment – the use of design patterns was a major thread of computer software architecture and design maybe 15 years ago and still has strong support. It basically identifies patterns for how pieces of a design interact that can be reused.)
He talked about client/server, publish/subscribe, and a couple of other basic patterns, then some more complex patterns including:
He showed examples of these patterns in four different interfaces (Parlay, Parlay X, Ribbit, and one other). He showed a specific example of Parlay and Ribbit side by side, showing the differences in how the same pattern is implemented.
Patterns provide a language-independent way of specifying repeatable ways in which components interact to accomplish something. Though language independent, they are often mapped into BPEL or Java for implementation.
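(Comment – as a concrete instance, the publish/subscribe pattern he listed might look like the following; this is my own minimal rendering in Python, not tied to Parlay, Parlay X, or Ribbit.)

    # Minimal publish/subscribe: components interact through a topic-based bus
    # rather than calling each other directly.
    class EventBus:
        def __init__(self):
            self.subscribers = {}  # topic -> list of callbacks

        def subscribe(self, topic, callback):
            self.subscribers.setdefault(topic, []).append(callback)

        def publish(self, topic, event):
            for callback in self.subscribers.get(topic, []):
                callback(event)  # a real system would deliver asynchronously

    bus = EventBus()
    bus.subscribe("call.incoming", lambda e: print("screening call from", e["from"]))
    bus.publish("call.incoming", {"from": "+49301234567"})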
He went through a discussion of a parental control service and described the underlying patterns for its implementation, then showed the mapping of that description into BPEL. The result apparently doesn’t quite work. (The reasons are fairly technical and I don’t think the 15 minute presentation was enough to really explain what was going on.)
Some lessons:
He says that in Austria mobile internet is cheap, and users often use it casually (e.g. killing time while waiting for the train). Today there is a lot of bandwidth at the radio interfaces, but there is a lot of inflexibility in the access structure and as traffic gets into the public internet. The concern is that as we get “hypergiant” content providers like YouTube, they set up lots of connection points to ensure that they are physically close to the customer requesting a download. Unfortunately, the mobile network tends to pick limited routes and fix them for the duration of the connection of the endpoint, which means it does not find the best route to these peering points.
He talked about various ways of handling this, including inspecting packet flows to determine best routing and changing the routing based on that. This works, but there are no guarantees because the internet doesn’t have any QoS guarantees. The internet continues to work primarily due to overprovisioning (more capacity than needed).
Another idea he presented was to take advantage of the nested characteristic of video coders (including H.264). In this view the core, which has to be delivered, is sent in one flow with high QoS and low delay, while “enhancement” information is sent in separate “best effort” flows. (Comment – déjà vu. In the early 1980s I was involved in the design of a packet network for voice using ADLPC coding which had the same property, and we handled overload by grouping the bits in the voice samples so all the most significant bits were in packets with the highest priority, while lower order bits were sent with lower priority and got dropped in overload. The result was smooth degradation, with the ability to handle substantial voice overloads without losing intelligibility of the voice.)
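(Comment – the bit-grouping trick is easy to show; here is a sketch using my own parameters, with 8-bit samples split so the most significant bits travel at high priority and the rest can be dropped under overload.)

    # Sketch: split each sample into must-deliver MSBs and droppable LSBs.
    def split_sample(sample, low_bits=4):
        """Split an 8-bit sample into (high-priority MSBs, droppable LSBs)."""
        return sample >> low_bits, sample & ((1 << low_bits) - 1)

    def reassemble(msb, lsb=0, low_bits=4):
        return (msb << low_bits) | lsb  # lsb defaults to 0 if its packet dropped

    msb, lsb = split_sample(0b10110111)
    print(bin(reassemble(msb, lsb)))  # exact: 0b10110111
    print(bin(reassemble(msb)))       # degraded but intelligible: 0b10110000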
He talked about federated architectures in which providers interconnect networks with routers with different characteristics, and basically trade resources to handle whatever the application needs (i.e. if it’s low delay, maybe it routes through a partner network that has focused on that, while your network takes their bursty, delay tolerant traffic for them).
He talked about other techniques to cooperatively manage the mobile internet that could improve performance to the end user.
They did a prototype implementation of rerouting a video flow on the fly. One conclusion was that a lot of the overload was not related to communication but to the implementation of the virtual machine on which the algorithms ran.
The presentation was made by Irina Boldea. Users want to be able to access their services any time, from anywhere, and from any device. The challenge is doing this in a world where the control of the telecommunication companies is decreasing over time because “over the top” players like internet providers have slowly captured much of the responsibility for providing services.
For example, Google now provides voice, mail, mailboxes, messaging, etc.
In their view the key elements to providing a converged service are converged messaging and context. She showed an interworking of services to devices that basically allowed a service designed for one endpoint to be delivered elsewhere. She talked a bit about standardization of converged messaging and the fact that it’s a very complex service.
Context was presented like a profile – lots of characteristics of the user including some attributes that are policies that control how the user wants to receive services. There were also things that were implied about the user (like buying interests) rather than being set explicitly.
A third element needed is “Personalized messaging”, which makes use of the other two.
She showed how this could be done using the policy deployment and enforcement mechanism of IMS to describe personalizations as policies and have them carried out by the converged messaging service as needed. She gave a detailed example (not really readable at the scale of the presentation).
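(Comment – the gist is easy to sketch: context plus a policy picks the delivery treatment. The policy format below is invented for illustration; the actual system expressed personalizations through IMS policy mechanisms.)

    # Sketch: personalization policies applied to a converged message.
    policies = [
        {"if_status": "in-meeting", "deliver_as": "silent-text"},
        {"if_status": "driving",    "deliver_as": "text-to-speech"},
    ]

    def personalize(message, user_context):
        for rule in policies:
            if user_context.get("status") == rule["if_status"]:
                return {"body": message, "delivery": rule["deliver_as"]}
        return {"body": message, "delivery": "default"}

    print(personalize("Budget call moved to 15:00", {"status": "driving"}))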
All of this was prototyped on the FOKUS Open SOA Telco playground, which provides models of the IMS components. They implemented the message personalization and converged messaging pieces according to OMA standards or pre-standards. One of the next steps will be to extend to LTE using the OpenEPC toolkit that they have.
Roberto is the technical chair for the next ICIN and has been a long time contributor to ICIN.
This is joint work with a university in
He stated a key problem as being how to virtualize a network while still maintaining quality of service guarantees (what one of the earlier talks showed). The future is totally distributed and maybe programmable.
In his view the internet is dramatically changing – it’s becoming asymmetric. Large companies like Google, Microsoft, Amazon, etc. essentially represent the internet to the user (i.e. if you aren’t on Google you don’t exist). This is a big departure from the open days of the past, where anyone who hooked up became instantly visible to all.
Cloud Computing: The trouble is that each cloud is a private universe – you can’t mix and match components from multiple clouds. Your choice of a cloud locks you in.
Load is driving companies to identify the few customers consuming lots of bandwidth and throttle them back a bit. This violates the myth that the internet is totally transparent.
Most telcos have decided they don’t like peer to peer networking, but in fact they should embrace it – it’s scalable and robust, at the expense of more explicitly dealing with the challenges of distribution.
His problem statement – rethink some aspects of network architecture:
Enabling technologies:
He talked about moving from a fully Autonomous network to cooperating networks – there will be multiple networks, they must cooperate.
A Network of Networks -- networks cooperate to deliver services.
He proposed a structure from peer-to-peer that relies on “special” nodes in the distributed configuration to be responsible for understanding the capabilities of the rest and enabling resource sharing. These become the inner circle responsible for making the network work. He presented a complete architecture around this concept that achieves high availability with unreliable nodes, and said that this is a structure which can be built in a “Network of Networks”. This is a way in which the Telco community could take the lead in defining the next generation.
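(Comment – as best I understood the “inner circle” idea, a few well-provisioned nodes are elected to track the capabilities of the rest. The selection rule below, taking the highest-capacity nodes, is my own guess at one simple policy, not his algorithm.)

    # Sketch: elect supernodes from an unreliable population by capacity.
    nodes = {"n1": 0.9, "n2": 0.2, "n3": 0.7, "n4": 0.4, "n5": 0.8}

    def elect_supernodes(capacities, count=2):
        ranked = sorted(capacities, key=capacities.get, reverse=True)
        return ranked[:count]  # these coordinate resource sharing for the rest

    print(elect_supernodes(nodes))  # ['n1', 'n5'] form the inner circle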
(Comment – Roberto covered a huge number of concepts that were complex and unfamiliar to many if not most, in a short period of time. I feel I got a bit lost in the details.)
The panel was all asked to comment on how they would define policies. Roberto talked about how the “grid computing” world is more complex than client/server; we have languages to define policies and programming, but there’s more work to be done. Irina talked about some specifics they used in their prototype: an XML language focused on flexibility, not necessarily performance.
Question (Ericsson person): Roberto suggested that one operator could provide a virtual IMS in another country. He suggested that this falls within the IMS standard and effectively means that the serving company is essentially supplying the IMS software on equipment in the served country (i.e. you are a virtual network operator). Is this true? I think he agreed, more or less. He cited another example of Apple as a VNO, using multiple carrier networks to serve their clients.
Question (Max Michel) – nice topics for next year’s conference, but he wants to know how we get people to pay for the future. The current model of the internet will not scale up, but adding all the complexity of telephony, with tariffs and exchanges on everything, will stifle innovation. (I didn’t understand the answer Roberto gave or how it related.) One of the other speakers suggested a market in communications technologies and futures as perhaps the foundation of how you put networks together.
ICIN has had a “poster” session for several years. It has in the past been used to accommodate presentations that didn’t quite make the grade for live presentation but seemed interesting or came from interesting sources (i.e. small companies and universities the conference wasn’t otherwise getting input from). This year we did something different, partly based on feedback I gave after last year’s conference. We made an explicit call for submissions to this session with the enticement that we would allow presenters to give short demonstrations. We also allowed the poster and demonstration presenters to submit full papers and have them included in the proceedings of the conference. We got many good submissions, about half of which were demonstrations. As the chair of the session I can tell you it takes work logistically to support this given the number of presentations and their special requirements. Fortunately I had excellent assistance in doing so.
Also new for this year was some dedicated time for viewing the presentations at an evening reception sponsored by Nokia/Siemens Networks. That proved very successful as it gave participants an extended period to interact with the presenters.
This presentation really focused on a mechanism for efficiently building services, allowing service developers to build services aimed at niche applications. It described a service broker from Fraunhofer FOKUS that allows services to be built from a wide variety of service elements (e.g. popular internet services in addition to session control) and exposed to applications through a variety of APIs following SOA and Web2.0 principles. The broker is designed to be “industrial strength” – high performance and scalable.
This poster presentation covered various factors in what determines needs for service creation specifically in carrier networks and how to apply Web2.0 technologies to help update the service creation environment to meet these needs.
This was an interesting and somewhat unexpected presentation, showing the problems an operator faces in specifying and integrating new terminal types with a lot of services. Specifically protocols and clients in the terminal are often incompatible and adaptations need to be made. What the authors propose is using various capabilities for download to essentially download a compatible client onto a new terminal to allow it to be rapidly introduced.
The Evolved Packet Core is a new set of service mechanisms to work in an all IP communications world on top of IMS, the public internet, or other access and transport networks convertible to IP. It provides capabilities for QoS and multimedia communication to support a variety of applications, but also has some new interfaces the applications must use to get those capabilities. What this presentation and simple demonstration illustrated was a simulation toolkit that would allow new applications to be built and integrated.
This poster and demonstration was aimed at allowing real end users to create their own services by composing service elements. They provide a graphical environment to build services with simple elements and allow the user to put those elements together, test them, and deploy the service. Their current prototype works with an Android smartphone to provide the service (i.e. the logic is downloaded into the phone which provides call handling), but the goal is to also be able to deliver the same logic to network based call control elements.
This presentation and demonstration showed how to take services from a variety of sources and combine them with a simple graphical environment built in Java. It could then deliver services over a variety of different access media (internet, IMS, circuit, etc.). They had a multimedia presentation illustrating the application and a short demonstration using SIP and a simulated IMS to show a service that would handle incoming calls by checking the called party’s Google calendar and, if the person wasn’t available, make suitable notations for later follow-up.
This poster was focused on a service that they are exploring to use technology to provide ways to enhance family and extended family communication. The service provides a family home for communication that allows secure access from anywhere and allows family members to share media, leave messages, and communicate often.
Max started with a view of what we thought in 2009. ICIN community was mainly interested in architecture and control and the impact of the outside on the network. There was a lot of interest in alternative networks (grids, sensors, etc.)
ICIN is unique in using peer reviewed papers. (Comment – there was some discussion of this in the TPC meetings. I don’t know of any other professional forum where every paper is reviewed by 10-15 experts in the field before publication. The review process is as much about stimulation of interests as about review.)
Some themes:
We then had a discussion of the paper on deperimeterization, which grew out of the attempt to summarize ICIN 2009 by a subcommittee of the ICIN TPC. (Comment – I had only a minor role in this one.)
Roberto – Network operator services are now strongly coupled to networks. That limits markets – TI has
Roberto also talked about the speed of standardization – way too slow to meet this kind of need. One problem is we think not about services but about infrastructure and cost effectiveness of infrastructure. Strong statement: Don’t deploy next generation infrastructure until we have a good idea of what services we want to deliver and know we can do it in an open way.
Telcos deploy standard but closed systems (we follow standards, but you can’t talk to our internals).
Google, Facebook, etc., deploy non-standard but totally open systems. Each has an API different from everyone else’s, but everyone is free to access it.
Comment: (Anders Lundquist) Strong statement. He talked about the app store problem. Once the application goes to someone else who isn’t tied to any particular operator, then the user can take his business anywhere that is cheap.
Stuart: RCS (Rich Communication Services) when it’s done will simply duplicate what Skype could do 3 years ago. We no longer lead in the area of services. He felt the segmentation by region was really a problem of regulation, which has shaped the way we compete or don’t compete.
Comment (ALU person): One thing that goes wrong is we assume we have to build the services ourselves. eBay had 8 billion API calls in the last month – most of their revenue (assuming these calls generate revenue) comes from other people using their capabilities, not their own services. (She advocated opening APIs and charging for them or establishing other compensation.)
Roberto – his company recently rolled out a strategy: “We need to create a lot of new services”. This was driven by the perception that they were going to lose voice revenue and to make it up in services would be challenging. He shares the view that you need ecosystems working on your platforms to build all those services. That’s a big attitude shift for most.
Chet McQuaide: he observed that part of deperimeterization involves sharing context. Context from selecting videos would be useful to Amazon (or Netflix?), while we would be interested in the profiles of those companies. Everything is shareable. (Comment – well, maybe not. There are both legal issues and customer resistance issues in sharing profile information.)
Comment (Danish TV person) – we keep mixing revenues from regional access with other things. Access is still a good business and it is regional. Services are more competitive and that’s where the threats are. (Comment – interesting; I wonder if the division in TV is a bit firmer than in telephony.)
Roberto: Competitors are doing a good job; they are satisfying customers. We should look at some things as lost and get into new things where we can compete. OneAPI is too late – a waste of money. We should be thinking about machine-to-machine APIs, augmented reality, and other areas where we can add value.
T-Labs person: His view is biggest competitor of operator A is operator B, not Google, Facebook, etc. If operator A holds back on introducing LTE, operator B can do it and get all the business.
(Comment – the real problem here in my view is that there are multiple businesses here: access and services. Access has competition on price, not services, and very few players because the required investment is large. The service business has been deperimeterized and is very competitive because the cost of entry is near zero. Operators need to become better bit pipe providers and recognize that services will be hard to compete in, but they don’t want that future.)
Rebecca Copeland: In some parts of
Max’s take was basically similar – it’s a business problem, not a technology problem that causes us to make a big deal out of roaming.
Session summaries:
The networking reception Monday evening was interesting, with short remarks by the Senator for Economics, Technology and Women’s Issues and a Mayor of Berlin (a Mayor of Berlin but not the Mayor of Berlin). (Comment – I’m not sure I got that exactly right; his title was given variously at various times and I admit I don’t understand the government structure here.) The key message is that Berlin has been working hard to establish itself as the technology center for northern Europe, with some success in attracting several large R&D companies. There were about 50 people from the local internet development community at the reception, though as an observation I think there wasn’t as much mixing between those folks and the ICIN attendees as had been hoped for. I think that’s part of the challenge for communication carriers – our culture of how we socialize, where we go to conferences, and what we expect to do there isn’t entirely compatible with the opinion leaders in new computing and communication applications.
During Lunch on Tuesday with Dave Ludlam, John OConnel, and an attendee from Telenor we had a discussion on how ICIN should evolve to stay relevant. Some interesting observations:
The attendee from
We talked a bit about VON as an older competitor to ICIN and how they got people. One thing was fear and focus. Everyone wanted to know what was coming with VoIP, and Pulver successfully associated VON firmly with “the place to be if you wanted to learn about VoIP”. (Comment – I think at the time this was easier, he was the leader, and the focus was initially all in the
In other discussions with Rebecca Copeland a couple of interesting things came out:
She made the observation that it is both cheaper and more convenient for her to use Vodafone voice roaming on her iPhone than to use Skype, because prices for roaming mobile data are expensive. The convenience factor is access to her normal address book and the fact that her normal phone number works without effort. She also noted the problem of compatibility with internet providers. Google and Skype don’t interwork (she said Sprint can connect them, but it takes effort every time one of their proprietary formats changes because it’s done by packet inspection and hacking). She felt that addressing this would require standards on the order of IMS.
Roberto – 30% of Facebook users in
In another conversation on vendor/customer relationships, Rebecca indicated that working with Huawei has presented an interesting challenge. Chinese culture is apparently not to say no, so it is difficult to get a definitive answer on what a product cannot do or what interface won’t be developed. We had a lot of discussion on this and realize it’s not always the case, but I hadn’t thought before about the impact of culture on technical/business negotiations. (Comment – in my view the difficulty in saying no is a significant problem in developing feature rich software. In the mid 1990s when I moved to Lucent’s IN business unit, one of the first meetings I remember was a management meeting with our new director. One of the long time managers overviewed recent results and proudly described how much revenue recent product sales to our best customer had brought in. The director asked one question: “Does anyone know how much those sales cost and whether we made a profit on them?” The answer of course was nobody did. It pointed to a persistent problem in the IN business. The platforms built by Lucent and others were supposed to be true standard products with multiple customers for the same elements, but our customers kept demanding unique features to support their unique applications, and both our developers and sales teams kept saying “yes”. The result was that instead we had a custom development business in which each product was customized for each customer, and our business was not set up to support this – too much overhead managing each unique combination.)
This was held in conjunction with ICIN, but is actually an independent conference which last year was held in
Pieter is the organizer of this meeting and in his overview went through a lot of basic characteristics of platforms and how they appeared in the mobile internet world as an introduction to the rest of the program.
What Roberto basically presented was that while usage of our networks is rising rapidly, revenue is at best stable (and most often declining). This is not sustainable, and it is in fact the success of the mobile internet that is responsible for the stress on those who have been most successful in providing it.
Over time the operators have reacted by trying to cut costs to survive. Some create new services, but they haven’t brought in enough revenue to offset rising costs of providing infrastructure. Regulators eventually have to make a decision – how to keep the operators alive in this environment. Do you want the operators to be simple bit pipe providers or allow them to be something else?
To make this concrete he stated the problem of a 40 Billion Euro service provider. If revenue drops only 1% every year, that’s a 400 million Euro gap to make up:
Someone pointed out that margin was important here. Indeed the new markets often have lower margin, which would mean to preserve your profit you have to grow revenue even more.
In entering new markets you may need to adjust business model (specifically he was looking at being a broker, with APIs empowering 3rd parties for the applications.)
Roberto’s slides had a whole series of business models, from “utility” at the base to “broker” at the top, ranked by margin among other characteristics. The clear message in these charts was that the operators need to move towards the “richer” end of that spectrum, where margins are larger.
Telcos face two problems in competing with “Webcos”.
Can Telcos enter new markets with a traditional ecosystem approach as has been used in the past to get applications?
What can Telcos bring to new markets?
Is the network an asset?
He then went through a lot about APIs and what could be done with them.
The bottom line from his talk was “Pick your role and figure out where you want to play.”
Question: 20 years ago there was supposed to be consolidation. Now we have more operators than ever. Why? Answer – there has been consolidation, but it’s invisible to the customer. Telefonica owns most of Telecom Italia. (The real message here is that yes, carriers have consolidated and that can help margins, but as they have consolidated they have retained traditional brand identity. A second takeaway was that regulators have not encouraged consolidation to the point where operators gain the power to raise prices.) (Comment – though they probably would encourage consolidation that allows operators to achieve economies of scale and lower costs.)
(Comment – Roberto gave this talk with a subset of a large file of slides, and afterwards said he often speaks this way when asked to present on these kinds of topics. There were many interesting charts he flipped through to find the ones he actually showed. I believe he has a lot of insight into how the communication business has evolved and the forces behind that evolution.)
Rebecca started by saying that she feels we are in the middle of a disruption introduced by the iPhone.
Different players with different motivation in App Stores:
It’s not about the revenue from the application. Apps are disposable – small, cheap, the price of a cup of coffee. You use them and throw them away. If you have to get a new one with a new phone, so what? They are cheap. (Comment – this is in stark contrast to PC applications, where the cost of software is very significant and as a result becomes a factor countering the desire to buy a new PC if that means having to pay all over again for Office and other expensive software. It is also a real contrast with the traditional telco model, which is about long term subscriptions and reducing “churn”. To exploit this, the merchant has to be agile, capable of selling a lot of applications at low overhead. Apple’s experience with iTunes is in fact perfect training for this.)
Apps are changing user attitudes. Things they try for apps become things they now require (e.g. multimedia, location, etc.). App Stores introduce customers to micropayments. (Comment – yes, I believe a big part of Apple’s success in particular is that many of the customers for the iPhone already had accounts on iTunes and had experience with it. I would actually cite another example: I believe electronic toll payments made a significant change in my behavior, and probably in many others’. When tolls were paid in cash there was a decision to be made on each trip as to whether to spend the money on the toll road or take the slower “free” road. Taking the toll road required having and spending coins – very visible and inconvenient. With an electronic toll account the payment is automatic and painless, and you much more easily perceive that the cost of using the road is small compared to the time it saves.)
The conclusion of her material was really that the whole concept of downloading applications for a low price is a radically different model than the one we are used to, and will as a result cause significant disruptions to how we acquire and use communication services.