This is the 12th ICIN conference, and nearly the 20th anniversary of the first. The conference has been held at various intervals, from every 2 years down to annually in recent years, and has had as many as 500 attendees in the boom years and as few as about 150 recently. It is notable for always attracting the key people in carriers and vendors who are concerned with services and service architecture. Europe, Korea, and Japan are always well represented, while the rest of the world varies. In recent years attendance from the US has been relatively sparse, especially from carriers; the cost of travel combined with the low value of the dollar is clearly a factor. Nonetheless the attendees represent about 70 different organizations in 25 or 30 different countries.
This year’s conference was
held at the Pullman/Sofitel hotel in Bordeaux Lac, a
conference center outside of the city. The
hotel's facilities, including conference rooms and meals, were first rate, though
the location is less than ideal because many attendees want to visit
restaurants and tourist sites in the city, which requires either public
transportation (a bus or a 30 minute walk to a 20 minute tram ride) or an
expensive taxi ride. One notable thing about the facilities was that each of the conference rooms provided power outlets at each seat, and the conference provided a free WiFi network for attendees. The WiFi worked only intermittently (likely too many people trying to use access points not up to handling the load), but the availability of power everywhere made it much easier to take comprehensive notes.
The fundamental theme of ICIN
has always been the creation and deployment of innovative services using
intelligence in communication networks and producing revenues for
telecommunications operators. In the beginning the conference focused on the Intelligent Network, but for the past 10 years the focus has shifted towards ways to take advantage of the creative energy behind the huge wave of new services being built for the Internet. Given the effort put into IMS, Parlay, Parlay X, and other standards and deployments for opening networks to innovators, it is disappointing to see that the success of these efforts is still very small in scale compared to what is being seen in the internet/web world. With the exception of content downloading and management, 3rd party participation in telecommunications services remains small. In considering the material and discussions in the conference I make several observations on the possible reasons and implications:
Stuart Sharrock
gave some statistical information on the conference to open the first
session. ICIN this year saw a slight upturn in
attendance from a year ago.
Stuart – We are approaching the Zettabyte era. He showed statistics for the growth of IP traffic, projecting 44 exabytes per month (up from about 10,000 petabytes, i.e. 10 exabytes, per month). The big new driver is IPTV and IP video. Mobile traffic is about 1% of this, but it has the most rapid growth rate percentage-wise (100 times in 4-5 years).
The second point is that as we move from voice domination to data domination, revenues are rising only slowly (flat rate plans, competition). This means that the network cost per bit sent must fall dramatically just to keep carriers profitable (nothing really new here).
Following this general
information there were keynote addresses from three industry leaders.
His organization is an
architecture/integration laboratory for NTT for next generation equipment. NTT reorganized in 1999 for NGN into 3 communication companies (east, west, long distance) plus Data and Docomo. The laboratory is part of the overall holding company. (Comment – sounds like the old Bell Labs.)
Broadband (ADSL and 3G) started in 2000 and 2001 for NTT. NTT actually has only 1/3 of the ADSL market, because NTT is required to lease its access lines at a low price to other operators. (The biggest share, slightly larger than NTT's, is SoftBank Broadband.) This led NTT to deploy Fiber to the Home (PON based); it became cost competitive with ADSL in 2004. PON allows the cost of the distribution network to be shared among multiple customers, so the economics depend on the take rate for the service. NTT deployed primary line voice service using VoIP over FTTH. (Comment – the hard issue here tends to be power. Not sure how they addressed it.)
Broadband subscribers in
Japan are 13M DSL, 4M CATV, and 11.3M FTTH.
FTTH will overtake DSL next year (DSL is decreasing). NTT has over 70% of the FTTH market. (Comment – that's a pretty good penetration rate. Not sure how many households there are in Japan, but the broadband penetration, especially fiber, looks much higher than in the US.)
NGN for them includes IMS and
managed communication. They are building
their NGN in the transit network, starting with commercial service in March
2008. The coverage area for the NGN will
be expanded until it covers their whole service area by March of 2011. Over the 2 years following that they will be expanding the coverage area of the NGN and migrating 20 million fiber optic subscribers (probably a projection) to the NGN.
Current revenue is 50% legacy
voice, and roughly 25% IP and 25% new business.
The expectation is 35% new business, 40% IP, and 25% legacy by 2013.
They have aggressively set prices for FTTH, and currently costs exceed revenues. The projected break-even is 20 million FTTH subscribers, which they expect in 2011. They are not on that projection based on the past growth rate (the gap seems to be about 2-3 million), therefore they need new services to draw more customers.
The first service was just transmission of TV over FTTH; the new service, Hikari (Light) TV, includes broadcast and content. A key cost requirement for them is support of multicast, which they achieve through the use of a managed network for distribution.
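(Comment – to make the multicast point concrete, here is a back-of-the-envelope sketch in Python with numbers I made up, not NTT's: with unicast every viewer of a channel pulls an independent copy across the core, while multicast sends one copy per link and replicates at branch points.)

```python
# Rough unicast-vs-multicast comparison for one TV channel; the bitrate
# and audience size below are assumptions for illustration only.
BITRATE_MBPS = 8        # assumed HD stream bitrate
VIEWERS = 1_000_000     # assumed concurrent viewers of the channel

unicast_gbps = BITRATE_MBPS * VIEWERS / 1000   # one copy per viewer crosses the core
multicast_gbps = BITRATE_MBPS / 1000           # one copy per core link, replicated downstream

print(f"unicast core load:   {unicast_gbps:,.0f} Gbps")
print(f"multicast core load: {multicast_gbps:.3f} Gbps per link")
```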
They believe a common SDP architecture is key to success, and the SDP supports both NGN and 3rd party services. They architect for two kinds of platforms, internal in the network and outside via an API (Parlay X). The internal SDP can export services to the external SDP. His examples included unified messaging and Smart Homes (security and appliance control together with multimedia communication). (Comment – nothing really new here, I think the problem has always been to come up with a compelling reason for the customer to pay for this kind of capability, since maintaining all the connectivity inside the home reliably isn't trivial.)
A key issue for them is migration of POTS to FTTH. POTS peaked at about 60M lines in 1999 and is now dropping rapidly, being replaced by FTTH based telephony. If you look at traffic, there was a sharp peak in 2001 due to the use of dialup IP access, then a very sharp fall-off in minutes after that due to migration to DSL.
To migrate POTS to the NGN, they introduce an aggregation media gateway that collects copper loops and puts them into the NGN, plus a control server to provide POTS services. The control server is deep in the network.
Question (Deutsche Telekom): What is the role of the regulator in making the transition? – They provide open access to other providers for both FTTH and metal access networks (just at the very basic level).
Question (Me): Is the NGN IPv6 from the start? Answer – yes, the core is IPv6, but they do not currently open IPv6 access to others. This is controversial.
Question (Aepona): How is revenue shared with their partners? Answer – yes, there is sharing, but the details are not available.
Question (Russian attendee): How did you address "Lifeline" service for FTTH? Answer – currently it's only on the legacy equipment; they are moving to FTTH/NGN but there are issues. The key issue for the questioner is power – the EU requires 2 hours of backup power, Russia requires 1. Answer – they aren't requiring this; their focus is on feature parity using the NGN. Currently POTS lines that are on the FTTH are routed over IP to their legacy POTS switches; they are developing the required features in the NGN directly (feature/control server) to support all required features. The power issue is being aggressively studied, but what they are hoping is that the government will not require them to provide power (i.e. you could use local battery backup, but it's the customer's responsibility to make sure it's big enough and reliable). Another motivation for them is reducing their own power requirements to comply with "green" initiatives. CO powered copper is a major power consumer (and probably a significant source of toxic material to be recycled, i.e. the CO battery plant).
He opened with some dramatic shots of the Sumatra tsunami – some watch in fascination, others run away. 2008 has brought an explosion of new end user devices – they can't all be managed any more. Also, there has been an explosion of network sites and content providers.
What’s really new with Web2.0
– APIs become a new product category.
Exposing your core assets via APIs is a new business category. He showed an organization of APIs and sources
from major internet companies – thousands.
Nothing of any significance is coming from the telecom companies. What's significant about this is that the millions of independent service builders are picking their APIs and there's nothing on the table from the telcos. Telcos have very valuable assets – you can build up social networking contents using the personal directories on mobile phones. (Comment – sure, but who owns the directory in your phone? I see some problems if your service provider sells your address book without your permission.) Telcos are starting to realize this and some are putting things out there, but with no global standardization. (Interesting – what about Parlay X?)
The vision: more revenue by opening up "the long tail" – 3rd party applications serve niche needs. (Comment – nothing new here.) Two types of customers – developers, who are looking to build things, and application downloaders. Developers are driven by ease of use and revenues; lots of APIs offer revenue for use. Downloaders are looking for cool stuff – mostly free, but sometimes they pay.
What does this mean for telcos? Selling APIs is a new business. They have to provide value, but should have low cannibalization (i.e. not take revenue away from their services). They have to be global – it is very important for developers that things be global.
In Web1.0, Telcos came in late – invested in partners and services
after they were already developed and expensive. The key for Web2.0 is essentially to get
access to new services very early and get LOTS of them. Every API is essentially an investment in the
companies that use it if you do it right, so companies like Google that open
hundreds of APIs to thousands of application developers are essentially like
early venture capitalists, looking to share in the growth of a few of those
developers into major new applications.
Telefonica has 240 million subscribers – that's a lot of value to users of their APIs.
Trends – more internet and web convergence: common network, fixed/mobile convergence, and converged devices.
What's the business model? (Comment – interesting data. IPTV is very small compared to total TV, and while downloads look big they are dwarfed by traditional kinds of e-commerce. I wonder how much of that Amazon and eBay traffic is CDs and DVDs, and whether that alone exceeds the downloads?)
The key thing is that this is
turbulent – things that were traditionally subscription may become advertising
based (e.g. Google office or Hotmail).
Some things move to subscription (e.g. renting services on the web vs buying software).
Looking towards Communication
2.0 – transitioning to digital media and content, converged smart devices, and
hosting 2.0 (hosted services for consumers and enterprises) – Any Content, Any
Network, Any Screen. (Comment – funny – this
sounds like a motto for a carrier or maybe Sun).
The trends they see in
devices include uniform content management, but also having a consistent suite
of applications. Devices are moving from
browsers to application platforms, like PCs.
(Of course he is looking at Windows Mobile based devices.) A new screen category is something called the Netbook – a mobile compact notebook focused on wireless connectivity.
He gave some examples of rich media from sports, including the Olympics – the average duration of watching online video from MSNBC's Olympics offerings was 34 minutes, 10X that for YouTube.
His view of Service delivery
– looks a lot like the telco view in layering, but
much simpler and more focused on services than transport.
Question (Chet McQuaide): What's needed to eliminate the frustration in the user experience in realizing a truly converged lifestyle? (Everyone referred to it in some form.) Augustin – design of services is detached from the user. People invent things they think will help without knowing, especially when invention is done in the telco. The key is focusing on the user experience: "Fun(ny) Services".
David – Microsoft plays 2 roles, API/platform builder and application builder. On the platform side they have made a much stronger push to put the things they have done into standards – IT standards and internet have been different worlds. He cited SIP as a counter example where people worked together, and Microsoft has been active in pushing content management standards from IT into telecom and mobile devices (Nokia is a major licensee for them). The commitment to end frustration is to make sure the platform works across everything. (Comment – nothing new here, Microsoft's strategy has always been to have the platform work everywhere.) On the application side, their focus has shifted from PC centric to making sure things will work on other devices as well.
Kou Mikaye
– Question is how to integrate what you need securely – today it’s usually
inconvenient. His children do everything
on the internet, but it’s risky – security is an important question and people
are uncomfortable with it. He comes from
the Telco world where you know who is on the end of a device with good
certainty, in the Internet you don’t know, so you have to rely on some kind of
3rd party provided authentication.
(Comment – I think he was getting at the question of identity management and "single sign on". For me personally one major inhibitor to making more use of e-commerce and electronic management and payment is that every company has its own security procedures, and many require you to expose personal information I do not want on my laptop or over the air, like my SSN.) The iPhone isn't selling broadly in Japan (Chet asked each to comment on the iPhone). Stuart – the Internet is designed for anonymity; there is no identity fundamentally part of the internet. This is a role the telcos can help play.
Question (Philipe Kelly, ALU): What space is left for traditional professional media
producing content? Orange has moved to
produce exclusive content for their IPTV and Mobile customers not available through
traditional channels. Microsoft – still
a good role for professional content and traditional media, just will be other
things. Advertising based professional
content has a strong business model.
User generated content is mostly in the long tail. (Comment – somewhere recently I recall seeing an analysis of some recent news events where YouTube amateur reporting and blogging was getting more information out than traditional media and getting more viewership. I could well see amateur reporting replacing a significant part of traditional news and sports.) Telefonica – plenty of room for both.
Question: What's the solution to the identity problem? Microsoft – government will provide one, enterprises will provide one, and the mobile SIM is one. Unfortunately he believes there will continue to be both 3rd party identity providers and service providers providing identities. (Google, Facebook, Microsoft, etc. all provide federated identities, but they don't in general federate across each other.) This is a huge challenge because it's not just technology, but agreement on the business and legal issues associated with identities.
Chet joined ICIN as part of
BellSouth, then worked with AT&T. During his work in AT&T he learned
there’s nothing like experience.
This presentation is based on a pre-IMS triple play (VoIP, ISP, IPTV) service. This isn't the first one they did (they did one on H.323; this one is SIP). They are targeting 5M subscribers by end of 2008. Their deployment uses both ATM and Gigabit Ethernet to connect the customers. They use a dedicated virtual circuit to carry VoIP into their core network, which uses IP VPN to provide interconnection.
IP addressing – they use NATs to implement private IPv4 addressing at the edges, re-using addresses 5 times. The NATs translate to public IP addressing in the core. The regionalization of the network makes it easier to manage the IP addresses and makes transmission paths shorter. It also simplifies adding new application servers, load distribution, and failure recovery for application servers.
The dynamic nature of the infrastructure has a lot of issues to solve – they need to know which servers were involved in delivering service to a user at a particular time to handle QoS issues (customer complaints). Tuning is very important – simple things like registration rates can matter a lot. Reducing the frequency of re-registration can save a lot of capacity and CPU cycles in their servers, but it also slows the detection of problems. Subscribe/Notify based services can have big impacts – they typically don't require real-time response and don't produce revenue directly, but consume a lot of resources. (Comment – one of the things Personeta had to do in implementing a presence based service is figure out how to separate the subscribe/notify traffic from the session control traffic, since the session control runs on servers with limited resources.)
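(Comment – the re-registration tradeoff is easy to quantify. A quick sketch with illustrative numbers of my own, not theirs: server load falls linearly with the registration interval, but so does the speed of noticing a dead endpoint.)

```python
# Average re-REGISTER load versus failure-detection lag for a subscriber
# base the size of their 2008 target; the intervals are arbitrary choices.
SUBSCRIBERS = 5_000_000

for interval_s in (600, 3600, 7200):
    rate = SUBSCRIBERS / interval_s
    print(f"re-REGISTER every {interval_s:>5}s -> "
          f"{rate:>8,.0f} registrations/s on average, "
          f"up to {interval_s / 60:.0f} min to detect a dead endpoint")
```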
Mass restart after a failure creates special needs. They have to provide network defense mechanisms against floods of traffic produced during failure recovery.
Conclusions:
The speaker was a substitute for the author and unfortunately had limited English skills. He described their approach to opening networks to 3rd parties for applications, based on the wide penetration of broadband.
Parlay/OSA APIs were originally designed for open access, but being based on CORBA they were too difficult to use. Parlay X web services APIs have replaced them for access by 3rd parties.
"Telco2.0" is a design pattern for the telco business. (Comment – not sure if he meant pattern as a formal term as used by people in software design.)
He described a mashup to produce a weather forecast service. The Korean Meteorological Association provides the forecast; they do text-to-speech to make it voice capable and provide some click-to-dial and location functions. He described some they did themselves: "Ann Diary", a directory/presence service; KT Messenger, an IM service; and a TV service (he ran out of time to describe it).
(Comment – it's unfortunate that the author couldn't be here. It looks like they are still fairly early in figuring out how to offer 3rd party interfaces, but it's not clear.)
Rebecca is a frequent
presenter and has a new book on 3G coming out.
OSIP stands for Open Systems Innovation Platform. It’s part of BT’s 21st century
network project. Michael presented a
value chain from invention all the way to the customer, and portrayed OSIP as
covering the architecture and productisation of new
services – it’s a process as well as a platform.
The Focus included enhancing
user interfaces through rich web-style interfaces which is really aimed at
improving customer retention. This is
producing feedback to evolve their 21CN architecture and APIs.
The second focus was on strategies to extend the experience (more devices with a consistent experience, more applications on the same devices).
OSIP is a nursery for services – this is missing around the industry. The idea is to explore and grow services that will empower BT to be a leader in the next generation. Nurseries have doors, and those are the end user interfaces. This is based on "true IMS", not pre-IMS. They are trying to test whether IMS actually meets the needs for a platform for new services.
They really wanted to explore video – video streaming, handover between endpoints, video calling, IPTV and rich voice together, and multimedia conferencing.
She went through Multimedia
“Ring back tone” – getting a personalized greeting for callers. They created a way for users to easily create
their greetings, but they didn’t address any legal issues that might arise (Intellectual
property, decency/obscenity laws, etc.).
OSIP "BT Live" portal – an advertising based portal site looking at controlling your services; the idea is to displace sites like Yahoo and Google as the home page.
Learnings:
(Comment – “Feature Interaction” was one of the big issues in the voice
world that never really got solved. Web/Internet
services generally ignored it as something that they didn’t have to worry
about. That’s not entirely true –
identity issues and conflicts over devices still happen. One thing that makes interactions
particularly nasty for communications providers is that the world of the Telco
is all about services that sit in between users and therefore have to deal with
the perspectives of each user on the interactions, while the web world
generally looks at it as the user deals with a single service provider at a
time who has complete control. Mashups modify this and bring feature interaction issues to
the web.)
Question – it looks like registration and subscription are big issues for network efficiency. Stephane described some IETF efforts to address these issues.
Question (Kris Kimbler): What insights does OSIP give you on commercial deployment of 3rd party services?
Michael – OSIP isn't a deployment vehicle; that goes through BT's lines of business. It would be nice if there were no gap, but BT hasn't figured it out yet. (Comment – Kris worked on Moriana's SDP 2.0 report, and one thing he told me in trying to recruit more participation was that he had few success stories in open services creation.)
Question (Orange Labs): What mechanisms allow users to interact between different applications? (She showed drag and drop between applications.) Rebecca – their interfaces are all web widget based. Some of what they did was possible because it was a closed "friendly user" environment and they picked compatible applications to work with. She thought it would be difficult in a truly open environment because users pick the applications. Use of open APIs isn't sufficient to ensure that this will work because there are a lot of them and they aren't consistent.
Question (Chet McQuaide): What conclusions can you draw from the pre-IMS work towards the adequacy of IMS to handle what people are looking at it for? Stephane – their work is really focused on VoIP only, and their tools were really focused on what messages would be used to support VoIP using SIP. He is a bit worried about the use of Subscribe/Notify in some of what he saw from OSIP resulting in scaling issues. Rebecca agreed and said they weren't looking at all the deployment issues. Michael said that he is familiar with BT's SIP deployment – one of the largest – and agreed with Stephane's analysis of the importance of tuning. Rebecca said that there are ways to reduce it – group registration, aliases, implicit registration. (Comment – again, filtering and segregation of registration, notification and session control is a way of attacking it.)
Hybrid mashups
combine information from Mobile systems (busy/idle, profile, location, etc.)
and internet (presence, maps, IM, profile).
Mashups use the mobile domain as a
supplementary data source and optionally as an access channel.
He started by describing the REST approach to APIs. The problem today is that much data isn't accessible, and even when it is, SDKs and SOAP are complex and heavyweight for the mobile data world. REST exposes everything as a resource using a URI: /user/joe/location. REST uses simple verbs on the resource (read, create, update, delete), and they map easily to HTTP (GET/PUT/POST/DELETE).
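(Comment – a minimal sketch of that style; only the /user/joe/location resource comes from the talk, and the host and other paths are hypothetical.)

```python
# Resources named by hierarchical URIs, manipulated with the four HTTP verbs.
import requests  # pip install requests

BASE = "https://api.example-operator.com"  # hypothetical operator endpoint

loc = requests.get(f"{BASE}/user/joe/location").json()                 # read
requests.put(f"{BASE}/user/joe/presence", json={"status": "busy"})     # update
requests.post(f"{BASE}/user/joe/attributes", json={"mood": "sleepy"})  # create
requests.delete(f"{BASE}/user/joe/attributes/mood")                    # delete
```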
RESTful design – use URIs to
identify users and attributes in a simple hierarchy. Each one has some challenges:
Several models for the
deployment:
Security is also a
significant issue.
Some lessons:
Who will offer the
services? Two models,
operators as bit pipes or as information brokers.
One player vs cooperation of multiple sources? Multiple
sources are a fact of life.
What will the charging
be? It has to be free for some level of
use. If you charge for everything,
people will find a way to route around the charging element. Mobile operators who charge for location will
see apps created to read the GPS information from the handset and bypass the
operator.
Profit isn’t going to be
quick or assured: It’s going to cost up
front to get into this job.
Question (Cap Gemini Sweden) – if the operator can't make a profit, why would they do it? Answer – to be innovative, to demonstrate leadership, and to have any chance of making a profit off applications downstream.
Question (Roberto Minerva) (he pointed out that in Telecom Italia, the operator has data on prepaid users): Which of the business models is going to win? If operators can cooperate, then it's the broker model; if not, it will be stand-alone by a 3rd party and the operators will largely not participate.
Question (Rebecca Copeland): there are 2 kinds of location – historic, based on what you know about where the person usually is, and dynamic, based on precisely where they are now. How much do you need dynamic knowledge? Answer – most things need real-time location. Operator systems to do location without the device don't scale. (Comment – I'd like to understand that a bit better.) Rebecca pointed out that many people don't need real-time; historic data like where the person tends to be is good enough for a lot of interesting services.
They started with IMS and Internet and set out to see what could be done in the social networking arena with it. They are looking for ways of using social networks to provide services that really help the user in their current context. Consider the problem of finding a doctor to treat some condition – today you research it using yellow pages, web, and friends, and it's tedious; provide some easy way to do it. Or consider arranging a dinner with someone at ICIN if you don't happen to meet face to face. (Comment – interestingly enough I was part of a group trying to do just that at the time. We succeeded, but probably missed some folks who would have liked to attend.)
Social interaction analysis: compute a directed graph with weighted edges that describe how heavily two people interact. Social proximity is a function of the interaction attributes, the use case under which they happened, and the context of the interaction. He is proposing to do this with a distributed algorithm using a semantic web representation for the relationships (versus sticking it all in a back end server).
Example: Mobile Social Helper. It dynamically constructs a social network from email, SMS, and call interactions. From these it computes a directed social strength between the actors, then identifies a relevant contact to call based on that network and context. (Example – a mother trying to locate a missing son has a best friend recommended as a contact.) (Comment – I've never understood how people get past the privacy issues in social networking. Yes, I use some of these sites, but instead of being delighted when they turn up a "friend of a friend" who happens to be someone I know, I feel like I really wish I knew what else was going on, so I knew what else these sites know about me. Maybe I'm just paranoid.)
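(Comment – a toy Python version of the idea: a directed graph of weighted interactions, from which the strongest outgoing edge picks the contact to recommend. The channel weights are my invention; the paper's scoring also factors in use case and context.)

```python
from collections import defaultdict

CHANNEL_WEIGHT = {"call": 3.0, "sms": 1.0, "email": 0.5}  # assumed weights
strength = defaultdict(float)        # (from, to) -> directed social strength

def record(frm, to, channel):
    strength[(frm, to)] += CHANNEL_WEIGHT[channel]

record("mother", "son", "call")
record("son", "best_friend", "sms")
record("son", "best_friend", "call")
record("son", "best_friend", "call")

def best_contact_for(person):
    # the strongest outgoing edge is the most relevant contact
    edges = [(w, to) for (frm, to), w in strength.items() if frm == person]
    return max(edges)[1]

print(best_contact_for("son"))  # -> best_friend, as in the missing-son example
```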
Example: Web Meeting Room Assistant – using location or RFID tags, the system identifies the people in the meeting room with you. When you open your laptop you get an email with a link to a view of all the users in the room, and can drag and drop documents to share them, store the community for later, and retrieve information the users share. (Comment – again a little spooky, but less than some others, and it does sound useful for several purposes, including just jogging my failing memory for names and faces to identify the people. I seem to recall reading about someone's research on a social assistant that would identify people you spoke to and provide a "heads up" display with their name and the context of recent interactions you have had. That could be done without releasing any information the others would consider private; it's really just a better memory.)
Question (not sure who) – in his opinion there is little real market for this, but it makes an excellent tool for spying. How do you put in a border between private and public information? Privacy is their top question. (Comment – yes, it's the one I was going to ask too.) His example was Facebook – there are privacy controls and users choose to share anyway. (Yes, that's the concern.)
Question (Kris Kimbler) – there was a study in Denmark showing that in spite of high concerns about privacy, something like 30% of people have sensitive information (bank accounts, etc.) on social network sites. In social networking you have to give something to get something. One thing that social networking can do is automate a lot of things people do now by hand – if the mobile system detects you are in Bordeaux it can update your LinkedIn, Facebook or Plaxo information to reflect that instead of forcing you to do it all by hand.
Comment
(Me): The privacy issues will have to be
addressed if/when they lead to crimes committed against people using private
information shared through these services.
It then becomes a legal issue to decide whether the operator took
sufficient care in protecting what the user thought was private and even
whether they informed the user adequately of the risks.
Question (not sure who): how do you identify real social networks? Answer – communication is a great tool to uncover who you really interact with. Facebook et al are artificial; they record who you chose to connect to, but who you talk to using your mobile phone shows who you really talk to. (Comment – yes, but that may reveal things that you don't want others to know.)
Rebecca – all very well to build a service like social networking, but how do they make money? Today, it's advertising, and everyone hates advertising. She doesn't see how social networking services will make money. People use them on the web now because it's free. Answer – within 10 years this will become so valuable to people that people will pay because they have to; it's part of their life. (Rebecca – that hasn't happened for search engines.) Comment from the audience – there are only two business plans for an operator when looking at a new service: ignore it and become a bit pipe with respect to it, or participate early and MAYBE make money. Kris – Google doesn't make money on search, but by being there it gets to make money on other things (advertising); this is going to be the same thing.
Roberto – there is a company
in the
Chet – how about a more passive approach? Instead of a panic button that an elderly person uses to get help, how about something that will tell the elderly person who is "near" so they can decide whether to ask for help? (Comment – the problem here is that the privacy issue isn't for the elderly person, it's for the friends who may not want their location shared.)
Content delivery is the most challenging job he has had in many years in the communication industry. IPTV means TV over a controlled IP environment, vs Internet TV – best effort over an uncontrolled path. Content delivery can be done over controlled content networks versus peer-to-peer. There are different sources (broadcast, movies, webcams, phones, etc.) and different outputs (TV, PC, mobile phone, cinema).
He started by showing the call flow for SIP using proxy servers to intermediate between endpoints, each using DNS to find the other. Today SIP uses centralized servers to do this (known as rendezvous). P2PSIP uses a distributed hash table implementation to distribute the information among the peers in the network and uses a hash to figure out where to start looking. (Comment – I heard someone, maybe even this speaker, talk about this before in a local presentation in Illinois. My reaction at the time was that you could do this, but I didn't see the benefit. Maybe I will now?)
This is being standardized in the IETF under the name RELOAD, the P2PSIP protocol. It has built-in security and NAT traversal – you can't not implement them. It will support other protocols, not just SIP, and you can plug in different distributed hash table algorithms.
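(Comment – a minimal sketch of the distributed hash table idea, assuming a Chord-style successor rule; RELOAD's actual algorithms, security, and NAT traversal are far more involved.)

```python
# Hash both peers and SIP addresses-of-record onto one identifier ring;
# each AOR's registration lives on the first peer at or after its hash.
import hashlib

RING = 2**16  # toy identifier space

def h(key):
    return int.from_bytes(hashlib.sha1(key.encode()).digest()[:2], "big") % RING

peers = sorted(h(p) for p in ("peer-a", "peer-b", "peer-c", "peer-d"))

def responsible_peer(aor):
    """Which peer stores the registration for this address-of-record?"""
    hid = h(aor)
    return next((p for p in peers if p >= hid), peers[0])  # wrap around the ring

print(responsible_peer("sip:alice@example.com"))
```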
The architecture is based on
a general overlay network which maintains information identifying the other
members of the overlay. It’s built
fundamentally on TLS/DTLS – can’t avoid security, and the members of the
network have certificates to validate identity.
The issues include:
Question (Huilan Lu, ALU) – how are the certificates distributed? It's all done by the registration server in the same transaction that registers you; there's no extra overhead for it.
Question (?): How does this relate to standard SIP? They are not changing RFC 3261 (standard SIP) but RFC 3263, which defines how you rendezvous between SIP endpoints using proxies and DNS. The protocol solution is much more general than SIP, but the IETF charter is limited to SIP. He is walking a fine line trying to keep it open without wandering off that track.
This work was actually done by her as a "high school" project as an intern years ago. (In a later discussion I learned that "high school" in France really means a technical degree of some sort. In fact she had a Ph.D. before doing this work and had gone back to get the "high school" degree, doing the project just before becoming a full-time employee of Orange.) IPTV is beginning to impose significant bandwidth demands on operator networks. The objective was to find ways to reduce this impact.
Consider Video on Demand – the user browses a catalog of programming, then requests a video. With a central implementation the video is delivered with a unicast flow containing the entire content. They were looking at using well known techniques, like P2P, to provide this with no assumptions about the popularity of a particular piece of content, and at evaluating alternatives.
They modeled the network in 3
pieces, Backhaul, Core, and Access. With
a central model each user has a dedicated connection to a server for each
content delivery – this means high network load in every piece and a large
number of servers.
The content delivery network (CDN) model introduces surrogate servers in the backhaul network. Before delivery the content is transferred to the surrogate server, and delivery is from the closest surrogate. This model has better scaling in bandwidth and fewer load problems, but requires lots of servers and thus provisioning.
In the P2P model there are almost no central servers; content is shared directly between endpoints. This scales, but there is a lot of communication required, so they looked at alternatives, like using servers in the backhaul network to deploy content but using P2P to distribute the content to them.
They developed a cost model based on transmission, servers, routers, etc., then plotted cost relative to the central model against the percentage of clients that request the same content.
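(Comment – a toy version of that comparison. The coefficients are invented, tuned only so the crossover lands near the 0.12% figure that comes up in the Q&A below; the paper's real weights are proprietary.)

```python
# Delivery cost relative to the central model (central = 1.0) as a function
# of the fraction of clients requesting the same content.
def cdn(p):  return 0.30              # surrogate servers dominate; weak popularity dependence
def p2p(p):  return 0.10 + 167.0 * p  # cheap servers, but coordination traffic grows with demand

for p in (0.0005, 0.0012, 0.0050):
    cheapest = "p2p" if p2p(p) < cdn(p) else "cdn"
    print(f"popularity {p:.2%}: cdn={cdn(p):.2f}  p2p={p2p(p):.2f}  -> {cheapest}")
```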
To address this they used a hybrid approach, with the central server deciding which model should be used. (Comment – I have no idea what a realistic "popularity" is. I suspect it depends on the specific weightings of the various cost elements as well as the structure of the distribution network.)
They built an architecture that would switch automatically, and patented it.
Question (me): Is the crossover point something that gets reached in practice, and how does it depend on the relative weights in the cost function and the network structure? It turns out it's really 0.12%, not 12% (presentation confusion). Ove actually said that in Denmark they have had many broadcasts that 10-30% of people watch. The paper covers the network structure; the relative weights are proprietary.
There are two kinds of personalization: one alters the logic of the session, and another changes the presentation or application. Personalization of audio services has largely not been successful because it's intrusive and limited. Video offers more opportunities: information banners during a video call, advertising (including targeted advertising and sponsored communication), and customized displays. (Comment – maybe what's going on is that video isn't linear in space, giving you opportunities to put stuff around the video. Audio is, and there's really not much most people want to do to it, though I have heard people talk about compressing audio or doing pitch conversions either for speed or comprehension or just to get around hearing problems.)
Where can the composition of video (e.g. mashup) occur? Not in the terminals – too low power, not accessible enough. It has to be in servers somewhere. It has to be open to 3rd parties, leverage the infrastructure available, and respect SLAs and the user interface. (His example mashed up an overlaid logo and a "ticker" bar for text information on a video stream.)
What they did was create an architecture where a customer using a portal describes preferences. These and the service preferences are exposed through one API. There are servers which do the mashup, and these expose another API with capabilities. The service runs in an external server which uses all the other information (presence, location, subscription information, etc.).
Users have always-on and connected terminals through telco networks, IP networks and mobile terminals. (Comment – your reaction to this probably shows how communication in your country differs from Europe. The Europeans really have reached this point. In the US we are connected much of the time, except when we travel outside the US and can't use our mobile devices unless we have pre-arranged it and pay exorbitant roaming fees, or use local devices which lose your context. In other parts of the world I suspect that mobile data is still a ways off.) Context includes location, time, device, and what you are doing. The objective here is to allow preferences to be expressed which control how content is shared and made available.
He described an architecture
which aggregates all the raw data available about what, when, how, etc. the
user is acting to extract the user context, then make that available to content
delivery services which would customize (cast) the content. This was trialed
for content sharing in Venice.
Kris Kimbler moderated this session. (Kris has a long history with Parlay and Parlay X as the leader of a major application server company, more recently working for the Moriana Group.) The panel consisted of Roberto Minerva from Telecom Italia, Augustin Nunez Castain (Telefonica), and David Pecota (Microsoft). Kris had only 3 questions for the panel: why open networks, how to open them, and when.
Microsoft: The why is about new business opportunities in working with 3rd parties. It generates new revenue. It also allows you to deliver a better end-user experience.
Telefonica: Why? Falling revenues and the need for new sources.
Telecom Italia: in 1989 IN got started to solve an internal control problem. After that there was a long stream of efforts (TINA, Parlay, JAIN, etc.). The industry was on board for 20 years, but nothing really happened until Web2.0 demonstrated it. He couldn't convince management there was enough reason to really do it. The question for the communication industry is why Google, Microsoft, etc. have success with open networks, and telecom doesn't. Roberto's view is that the web is focused on data and information – they are a global database. Data without APIs to access it is useless. Opening up was imperative.
Augustin: being open isn't just about APIs; they have to be easy to use. When telecom opened APIs they were hard to use.
Roberto: You have to match what the "web programmers" are comfortable with – we saw the "REST" interface yesterday: treat everything as resources. It's a stateless interface. The web/IT programmers are used to client/server and stateless interactions; our interfaces for programming call control are state oriented. Another point – web interfaces are free and usually anonymous. Telecom interfaces are almost always for a fee (sometimes a large fee).
Kris: Web programmers don’t understand telecom –
they think it’s boring and simple compared to things like Maps and Social
networking.
David: Microsoft thought about building a platform
from the DOS days, and had to deal with the reality of not being in end-to-end
control. They learned the benefit of it
and have continued to expand APIs and capabilities as they went forward. He participated in an SDP summit in 2001 and
made this point. Doesn’t know why Telcos haven’t done it, but they need to see the benefit
and the value of the things that they can provide.
Kris: Microsoft sees business value from APIs and
getting applications on their platforms.
David: Yes, and now Services are a part of the strategy too.
Question (Aepona): If telcos provided a rich set of interfaces would adoption be any better? He thinks not. He thinks it's a marketing problem – "build it and they will come" – well, they didn't come. Telecom needs to market those capabilities and build the market for them. To be successful they have to first demonstrate what can be done with the capabilities people have now, and nobody has done that.
Kris: Well, we have a problem, because Facebook and Google don't advertise; people just find them. (Aepona – use of APIs is viral, people get excited about it. His son works for Myspace, and when he tries to talk to him about telecom the reaction is "it's boring".) (Comment – is the problem that telcos really have nothing to offer, or that we haven't figured out how to offer it?)
Augustin: Why do developers have so much interest in the iPhone? Because it's global, and they make money on it.
Roberto: Operators are investing a LOT of money in
infrastructure. IP is very risky to
operators. IP traditionally has meant
intelligence at the edge, and there’s a big risk that all the operator’s
investments will be bypassed. He used Google Maps while driving to Bordeaux – it could locate him within 2 km with no agreements or funds paid to the mobile operator.
David (when asked what should be exposed) – call control, location, authentication, billing.
Augustin: The hottest
capabilities in web 2.0 (location, presence) used to be telco
capabilities. We need to recapture the
initiative in exposing them.
Kris: Nobody is talking about call control.
David: Microsoft is working closely with the
enterprise players (PBX, IPPBX), and that includes call control.
Aepona person: Described one of the things he worked on for the health care industry in the UK – they have predictive models to figure out how much care they can give without going over government reimbursement limits, which they want to use in appointment scheduling, and they really wanted to incorporate voice and SMS in their appointment scheduling.
Comment from audience: Consider the experience with SMS in Norway and Sweden. One country had much more business for SMS early – why? It was cheaper. Eventually cheap SMS became Europe-wide, and it had APIs so everyone could build to it – it's ubiquitous. Old habits die hard – when you ask people to vote, Swedes dial the phone, Norwegians use SMS. (Comment – and Americans go to a web site!)
Augustin: Developers
are lazy – if you have one API that works everywhere they will use it. Hundreds of APIs each of which works
differently and in a different place isn’t interesting to them.
Roberto: his wife does not like technology, but spent a fortune on SMS. He bought her a Blackberry – 10 Euros gave her the terminal and email. (Comment – WOW! I think Blackberries go for at least $100 in the US and the data plans are much more.) WiMax is interesting competition – 25 Euros gives you unlimited data and roaming; voice is 10 Euros more.
Questioner: The issue is the business model; it has to be the same at least on a country-wide basis.
Wrapup statements:
Roberto – presence is about your social network. You install and use the one that your friends have because it's valuable to you (i.e. it's not just about whether your phone is on or not). As terminals get smarter and capable of supporting applications, users will download their favorite presence application and use it. Operators don't own the customer; they are enablers.
David – Not a black or white
world, it’s about partnerships. They
worked with the Number 3 mobile company in France to integrate MSN messenger in
the phone client. They were quite
successful with it. Working with Orange
on Windows Live Messenger, they developed a shared service that combines the
messenger client with capabilities from Orange (back office, billing,
etc.).
Augustin: Don't focus on your platform; focus on the developers and what they want – interesting APIs that are global, free, and available, and that allow them to get revenue. It has to have an SDK and open source applications.
The work was on general service capabilities, but the presentation focused only on presence information. Presence is a set of attributes that provides properties of "presentities". A presentity is a uniquely identifiable entity with status, and there are watchers of presentity status. (Comment – I presented a concept like this, multiple identities for presence, at a VON event in about 2000, and also had an ICIN paper on it in 2001, though I don't know if it was published since I could not go. I was told the concept went back further.)
He described a classic
subscribe/notify model of presence information sharing, using XML based
definitions of what is of interest and what is being notified, and an XML based
scheme for defining permissions. A major
problem with this service is the amount of traffic that is generated for the
notifications.
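(Comment – a minimal sketch of why the notification traffic grows so fast: every status change fans out to every watcher. The class and method names here are mine, not the paper's.)

```python
class Presentity:
    def __init__(self, name):
        self.name, self.watchers, self.notifications_sent = name, [], 0

    def subscribe(self, watcher):
        self.watchers.append(watcher)

    def publish(self, status):
        for _ in self.watchers:          # one NOTIFY per watcher per change
            self.notifications_sent += 1

alice = Presentity("alice")
for i in range(200):                     # a popular presentity: 200 watchers
    alice.subscribe(f"watcher-{i}")
for status in ("online", "busy", "away", "online"):   # just 4 status changes
    alice.publish(status)
print(alice.notifications_sent)          # -> 800 notifications
```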
His solution used a group
manager to store the presence information and manage it. In addition to presence it could be used to
monitor status of many things, including providing an on-line betting forum?
Question (Roberto): What about security? Answer: The XCAP protocol (an XML based presence protocol) uses TLS for security, but it doesn't support all of the things his scheme does.
Question (Chet McQuaide): If betting is a possibility, would it also work for customer surveys and ratings? Maybe.
Presence enablers available today include SIP/SIMPLE PIDF and RPID (data formats for describing presence), 3GPP defined iFC to describe presence, and OMA, which has many specifications for presence.
The problem is that it is difficult to set status because there are too many ways to do it, and the result is there is no motivation to use it. (Comment – I rarely fool with anything but the automatic presence in applications like Skype. I don't think it's the underlying stuff that makes it hard; I think it's user laziness.)
Two classes of interfaces are competing for many things in web services: SOAP and REST. He indicated that where both are available, something like 85% of new applications use the REST form. Currently all the presence APIs use SOAP, which may be difficult to use. His basic proposal is to use a REST interface to express presence.
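(Comment – under that proposal, setting status could be a single idempotent PUT on a presence resource; the URI and JSON schema here are hypothetical, just to show the shape.)

```python
import requests  # pip install requests

requests.put("https://ims.example.net/presence/sip:alice@example.net",
             json={"status": "available", "mood": "happy"})
```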
Somehow this was combined with the use of avatars (cartoon figures) to describe presence. (He cited some RFCs which translate presence state descriptions into XML, but I don't see where the avatars came in.)
He showed a video of the user interfaces for a couple of applications, including a Microsoft scheduling application modified to make use of the IMS presence service using REST.
Question (Ulrich Rieber): Has NTT extended this concept into other services like shared TV watching? Answer – REST is a difficult interface for telecom operators because it's not very secure, so there is a reluctance to put a lot of information in it.
Question (Chet McQuaide): Did you look at the traffic implications of allowing people to finely describe their presence (mood), and would it swamp the presence server? Answer – this is future work.
How can we do more than just busy/not busy to enable more value? The carrier knows the user's status and the user's location; if we can add the location's context we can enable a lot of different services. The idea seems to be to associate properties with a particular location and automatically infer information about the user's context. The owner of a location specifies the context of the location (i.e. what it's for, why people might be there), and the technology identifies when people enter a particular location zone.
There are a lot of existing technologies for location; they mainly didn't do anything new there, but they did look at Bluetooth as a location technology – put a small Bluetooth device in a location zone that will connect with and identify devices coming into range, as a way of noticing who is in a particular area. The next thing of interest is registering a zone, and the process depends on the technology: for example, with cellular triangulation you don't have to install anything to create a zone, but you have to describe the location in detail. With RFID or Bluetooth you don't have to describe it (the location is whatever is in range), but you have to install the receivers in the areas you want to have in the zone.
Different locations drive different communication practices (e.g. you may want calls on your home phone if you are home but your cell phone in some other areas). You may have a preference for SMS or voice depending on where you are, and a service that can translate between SMS and voice. There are also quiet zones where ringing phones are prevented. If you had the location technology to infer that a person was in a quiet place you could automatically change the person's phone profile in response.
She described an architecture for this that first identified from raw data
what zone the user is in, then filtered that information for significance
before passing it on to the presence server to avoid swamping the network with
location updates.
An interesting scenario –
detect the presence of multiple people in one place at work as a meeting and
automatically change the zone to a quiet zone so that calls will stop going
where people don’t want them.
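(Comment – a sketch of that pipeline as I understand it; the structure and names are mine, not IBM's: map raw fixes to zones, forward only significant changes, and let zone context drive the profile switch.)

```python
# Assumed zone-to-context table; a real system would let zone owners register these.
ZONES = {"meeting-room-3": "quiet", "home": "normal", "store-acme": "retail"}

last_zone = None

def on_raw_fix(zone):
    """Called for every raw location fix; forwards only significant changes."""
    global last_zone
    if zone == last_zone:        # same zone as before: drop it, don't swamp the server
        return
    last_zone = zone
    publish_zone(zone)

def publish_zone(zone):
    context = ZONES.get(zone, "unknown")
    if context == "quiet":       # e.g. a detected meeting: silence the ringer
        print(f"{zone}: switching phone profile to silent")
    else:
        print(f"{zone}: context '{context}' published to presence server")

for fix in ("home", "home", "meeting-room-3", "meeting-room-3", "store-acme"):
    on_raw_fix(fix)              # only 3 of the 5 raw fixes reach the presence server
```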
There are other uses possible – detect when a person is in a store's zone and use that information, plus the person's past interaction with the store, to determine what advertising they get.
IBM started down the path of
Bluetooth based location when they couldn’t get other location
information. There were lots of
objections to making it work, but she said that the technology does allow rapid
identification of a Bluetooth device provided it is on. On average 10-20% of people carry an enabled
Bluetooth device. More people do so in a
business setting.
In response to a question about the desirability of advertising, she said a lot depends on how you ask the question – ask people if they are willing to get "push" advertising and 85% say no; ask if they would like to get discount coupons based on location and 90% say yes.
Question: How do you make this work across many handsets? They are looking at standardization.
Question: Who defines zones? Anyone.
Question: Do you need SIP handsets? No, SIP is used internally; there is no requirement on how the handset communicates with the network.
Question: Aren't there privacy problems? This has to be a buy-in kind of service. People are beginning to realize that the telco knows where their mobile phone is all the time. It's only scary if people feel they have no benefit from it. Lots of applications are gaining acceptance because there are benefits (e.g. finding family members or tracking Alzheimer's patients).
Question: With multiple technologies for detection, how do you handle cases where they produce different answers? Answer – they put a lot of effort into this. They have established a default priority for the system but provide for applications to set their own priority. For example, some applications may care which room I am in in a building, others only whether I am in the building or not.
Question: What about overlapping zones? Answer – this is like the last issue: a person can be in multiple zones, and the application gets to decide what it is interested in.
(Comment – this may be the best "Presence" paper I've seen. The services are useful, the technology is plausible. Where do I sign up?)
General
Questions.
Question from Bernard to NTT speaker: The mood indicator has a potential for misunderstanding. If you say you are sleepy and forget to remove it, would you get in trouble with your boss? Should the system automatically remove moods over time? The answer seemed to be that for him changing his mood was not difficult. The IBM speaker suggested that maybe mood should be attached to who is viewing presence (i.e. to your boss you are never sleepy).
Question: Sometimes it would be useful for the user to get a notification if they enter a particular zone; sometimes it is interesting, sometimes it isn't. Can it be done? Yes, it could be done with an icon on the phone or an SMS or USSD message to change status.
General question from Bernard: For all, what are the privacy/security concerns? Answer (IBM): Separate privacy and security. Security is about using secure networking; that's easy. Privacy is context dependent: some information I want to share, other information I don't want to. Applications need to have a richer definition of privacy rules. For Facebook you set your status and everyone sees it; what you really want is that you set status and the people you allow to see it will see it, but nobody else. The standards support this kind of selectivity even if not all applications do it now. NTT – our prototype just focused on how to use the API; privacy is future work. OMA supports privacy rules, but implementation is difficult.
The paper was presented by one of Claudio's colleagues, M. Valla. The premise is that mobile devices have gotten powerful enough to have lots of applications available and downloaded, to the point where choosing the application has become difficult, as has customizing it. Having the network do so based on context has value.
He presented two applications
based on SPICE, a recently completed European research project: The dynamic desktop, which chooses which
widgets to display based on context, and the terminal
manager, which synchronizes content with other devices based on what network
technology and battery condition is available.
The architecture included an XML rule engine and a collection of sensors that allow it to observe relevant conditions, like network configuration and system configuration. There is a GUI which allows creation and modification of rules. The engine activates applications based on the firing of rules. There are 3 condition types supported: clock (periodic), value (when something reaches a specified value), and change (when some condition changes). The specific rule language was chosen to be simple, XML based, and human friendly, and to satisfy W3C standards for XML languages (RIC).
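(Comment – a toy rule engine in that spirit, showing the three trigger types; the real system uses an XML rule language, and this condensed Python form is mine.)

```python
class Rule:
    def __init__(self, kind, probe, action, arg=None):
        self.kind, self.probe, self.action, self.arg = kind, probe, action, arg
        self.last = None  # previous probe value, for "change" rules

    def evaluate(self, now):
        v = self.probe()
        fire = ((self.kind == "clock" and now % self.arg == 0) or        # periodic
                (self.kind == "value" and v == self.arg) or              # reaches a value
                (self.kind == "change" and self.last is not None and v != self.last))
        self.last = v
        if fire:
            self.action()

battery = {"level": 80}
network = {"type": "wifi"}

rules = [
    Rule("clock", lambda: None, lambda: print("clock: periodic check"), 2),
    Rule("value", lambda: battery["level"], lambda: print("value: battery low"), 20),
    Rule("change", lambda: network["type"], lambda: print("change: network changed")),
]

for tick in range(3):
    if tick == 1:
        network["type"] = "3g"
        battery["level"] = 20
    for r in rules:
        r.evaluate(tick)
```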
While they have a user interface to allow users to specify rules, they really want to automatically generate rules based on user behavior (e.g. if I check email at a particular time, it should be able to infer that I want to do it all the time). The interface to this has 3 capabilities – view the behavior detected, view the user's rules, and view the inferred rules. They have prototyped this, but apparently not trialed it on any large scale.
Some of the things they want to do going forward include:
One basic capability providers need is feed aggregation, where information from multiple sources gets integrated into a single view. He gave netvibes as an example, which displays many different kinds of information, each in a small window. The information comes from feeds supplied by different web site providers. This is very oriented towards data, not services. (Comment – not just that, it's really oriented only towards sites that export data in a compatible format.)
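(Comment – a minimal sketch of the aggregation idea: pull several feeds and merge the entries into one reverse-chronological view. The URLs are placeholders.)

```python
import time
import feedparser  # pip install feedparser

FEEDS = ["https://example.com/news.rss", "https://example.org/blog.atom"]

entries = []
for url in FEEDS:
    for e in feedparser.parse(url).entries:
        entries.append((e.get("published_parsed") or time.gmtime(0), e.get("title", ""), url))

# one integrated view: newest first, regardless of which site supplied the item
for ts, title, src in sorted(entries, reverse=True)[:20]:
    print(time.strftime("%Y-%m-%d", ts), title, "-", src)
```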
Orange sought to use the same
concepts (aggregation and widgets) to aggregate different telecommunications
services.
The main components of their services include a collection of client components in JavaScript, including a parser, downloader, and preference manager, and the server side, consisting of a database that includes preferences, user information, and services.
The
key success factors for IMS are not only the availability of IMS clients, but
the user experience they offer. This
requires a client for both mobile terminals and for fixed client.
There
are no standards for clients. The focus
on standardization has been on the network, but the client is where the user
experience will actually come from. The mobile phone has become a very powerful computer, but it's like a laptop without the benefit of a popular, well-defined operating system or interface.
The state of the art includes some core APIs for IMS capabilities and services. One alternative is to put a Java client on top of this that works on the device. Another alternative is to push the client implementation into the network, using only browsing/display capabilities in the device. Service invocation is then through network capabilities rather than APIs on the device. This is more general and more easily adaptable. It uses JSP (Java Server Pages) and cascading style sheets to handle different terminals and capabilities.
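(Comment – a minimal sketch, assuming a plain servlet rather than their actual JSP pages, of how a network-hosted entry point might select a stylesheet per terminal so the same markup renders on different devices. The class name, user-agent test, stylesheet paths, and service link are all hypothetical.)

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ImsPortalServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // Pick a stylesheet based on the requesting terminal.
        String ua = req.getHeader("User-Agent");
        boolean mobile = ua != null && ua.toLowerCase().contains("mobile");
        String css = mobile ? "/styles/small-screen.css" : "/styles/desktop.css";

        resp.setContentType("text/html");
        PrintWriter out = resp.getWriter();
        out.println("<html><head><link rel='stylesheet' href='" + css + "'></head>");
        out.println("<body><h1>My IMS services</h1>");
        out.println("<a href='/service/call'>Start a call</a></body></html>");
    }
}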
They
did a proof of concept as a joint project with ALU and others, demonstrating
the universal user interface. The worst
problem was that it was cumbersome – too many clicks to get to where you wanted
to be.
He noted that the audience here is organized neatly in rows; a computer would probably order people by height or alphabetically, but a human being might prefer to organize the room according to what they are trying to convey to the audience (customers in front, vendors in back, single women in front, others in back, etc.). The trouble with mobile devices is that we
have exceeded the ability of human beings to visualize the information on those
small screens based on computer provided organizations. Humans know how to navigate spaces in 3
dimensions. We can handle horizontal or
vertical navigation on devices, but not both at the same time. That produces a keyhole effect (region you
are working in is too small). Web pages
are designed for big screens. Ideally on
a small screen you want the information of interest to come first, but
typically the upper left corner of a page contains useless stuff, and the
information of interest is well down.
Some systems (iPhone, Opera Mini) have managed small screens pretty well, and some lessons have been learned from them.
He showed some trends in
screens. Resolution (pixels per inch) is
getting denser, almost to that of paper, and "landscape" phones are becoming more popular (landscape mode works better for fitting the web).
He showed a couple of music players, which use a hierarchical menu – the problem is that most of what they show is useless (menu levels and choices you aren't interested in), and the few items you do get to see are cut off at the right.
He showed a demo using the music itself as the interface (album covers); you organized it yourself by directly manipulating the icons on the screen with your fingers. You could get a lot more information onto the screen, and a lot more of it was relevant.
I presented a set of mobile needs and a comparison of 4 architectures to serve them. I got relatively few questions on it. My main messages were:
Many users won't use the full capability of their phone because they won't download, install, and manage applications. What's the solution? The buzzword is widgets – small applications that can be downloaded and installed easily – but it doesn't quite work.
What's another problem? Personalization. Personalization enhances business by making it easier for the user to use services and by making it harder to switch carriers and lose it. Personalization can be done explicitly or implicitly, but explicit personalization takes effort from the user.
What they provided was a personalization framework that automatically
configures and personalizes the user’s communications through widgets without
requiring explicit action from the user.
You obtain new applications
by dragging and dropping widgets off a browser screen onto an image of your
phone, then the next time you synchronize the phone the applications are
automatically installed. Usage is tracked – how often and in what way each widget is used – and the result is automatically personalized. The next time you go to configuration
the system will use this information to suggest additional things you might
want. (Comment – like recommendations from
Amazon).
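(Comment – a toy sketch of how such usage-based suggestions might work: rank widgets by aggregated usage and suggest popular ones the user hasn't installed. The widget names, counts, and popularity heuristic are all made up for illustration.)

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class WidgetSuggestions {
    public static void main(String[] args) {
        // Usage counts aggregated across the user base (hypothetical data).
        Map<String, Integer> popularity = new HashMap<>();
        popularity.put("weather", 120);
        popularity.put("traffic", 95);
        popularity.put("stocks", 40);

        List<String> installed = List.of("weather");

        // Suggest the most-used widgets this user does not have yet.
        popularity.entrySet().stream()
                .filter(e -> !installed.contains(e.getKey()))
                .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
                .limit(2)
                .forEach(e -> System.out.println("suggest: " + e.getKey()));
    }
}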
This meets the needs of casual users well; power users want to manage things themselves. There are security issues,
especially with giving widgets access to local information. (Comment
– yes, this is exactly where Microsoft and others made the mistake that gave us
a host of security issues and the need to do virus scans. You might solve it with some kind of sandbox
environment. I don’t know).
He said that the next step is pursuing standards for widgets. It's not clear where – the BONDI group (which standardizes widgets) or W3C.
Question – you show this being done through the network; can you do it locally? Answer – yes, but it's not clear why you would.
The real need for new
services is to be easy to use, easy as the telephone. Second, new services have to be home
electronics friendly (that means it doesn’t require a PC, the idea is to be
able to control a DVR or other device remotely using only the telephone as a
keyboard). Third, it has to be NGN friendly, working with SIP and SBCs.
Some example applications
included remotely setting up recording on a DVR, transferring pictures to a
digital picture frame, and remote office.
(Comment – interesting, just last
year I considered buying a digital frame for my 88 year old mother and rejected
the idea because controlling it was beyond her.
Not sure that this would help a true technophobe)
The architecture here uses a
residential gateway, which includes connections to the NGN, the telephone and a
home LAN. The electronic devices and PCs
hang off the LAN. In order to enable
communication across the NGN, two LANs are joined by a VPN tunnel (Comment – the home gateway here is very
similar to a device being designed by a small company I have worked with who
were looking at it as an extension of a DSL or Cable Modem.)
The service uses telephone
dialing and numbering to create the VPN tunnel and authenticate and approve
it.
Question (Bernard) Do you need a special client on the electronics to interact with the
VPN? No, it’s transparent.
Question (Max Michel, …)
He talked about RFID and its characteristics. The Near Field Communication Forum defines standards for all of this (I guess that's what NFC in the title meant). The focus has been on payment and ticketing as key applications.
Security is a key issue for
this.
One scenario is associating
some digital media with an object – a song attached to a bouquet of flowers,
which plays on their music player when it gets in range. Another example was one where the user would
touch his mobile near an advertising billboard and pick up information related
to that poster, then take the phone home and touch it near his TV to play
it (for example, a preview of a CD, movie DVD, or concert, which after a few minutes would offer the opportunity to buy it).
He went through the
architecture for the touch example based on RFID. The touch interpretation system interprets
where the user touched the phone and what the context is, then
feeds that to an application which decides what to do, then interfaces with a
media system to launch the download.
He presented different business models for advertising-based services: Pay Per View (PPV), Pay Per Click (PPC), and Pay Per Action (PPA). Pay Per Click is relatively easy to monitor but very subject to fraud. This is the major model used for advertising. Pay Per
Action is based on the user actually doing something in response to the
ad. The action is reported by the
advertiser. It has a better measure of
results (but is hard to do and subject to fraud by the advertiser.)
The main problem with PPA is
social – Advertisers may misreport what they are getting as actions and in
effect cheat. Also the advertiser bears
the cost of collecting and reporting data.
The solution proposed here was assuming that the “action” would result
in a phone call, allowing the telecom service provider to detect and report it
as a neutral 3rd party. (Comment – this just struck me as strange,
since if the intent is for people to pay for ads that result in business
transactions, these days that is much more likely to occur as the result of a
web click than a phone call. It struck
me that you might be able to build a 3rd party mediator that
monitors the customer’s internet traffic and detects a successful purchase.)
Session Linkage amounted to
two customers sharing the same session (e.g. sharing a web session). The concept was to use a SIP session to
authenticate both users and then allow them to create a shared session.
This session had the
unenviable role of opening the day following the Gala dinner, and as a result
attendance was lighter, and my personal attention less focused than this topic
would normally warrant.
Roberto organized his talk
around 4 myths about the network. The
first one he addressed was the "myth" of network transparency. Why do we have myths? Because we want to simplify the view of the situation for people so that they do not have to worry about unimportant details. We assert networks are transparent
because we don’t want programmers to worry about what is happening in
them. In fact a lot has to go on inside
the network to enable this. (Comment – true, but that’s not what I know
as network transparency, which to me means only that the network will not
interfere with what is sent through it or discriminate based on which endpoints
are communicating or what they are communicating)
One problem – Abstraction
Inversion – too many concepts wind up being handled inside layers of
abstraction when in fact programmers need to deal with some of the details and
wind up re-inventing the things that were abstracted away. He gave an example of a system that
abstracted signaling (SS7 and MGCP) away from the programmer in an abstract
interface. The service being implemented
in fact needed the details abstracted away.
Simplicity of middleware. He quoted Kumar Vemuri from Lucent (Comment – Kumar worked for me and published a paper at ICIN in 2000) on how, while we want call models to be simple to make them easy to control, in fact the complexity of the model (states and transitions) gives you extra functionality. You can address this by subdividing a complex model into multiple state models, each of which is simple.
Recent work (University of London)
suggested that in applications, specifically for mobile networks, the
application needs to be aware of a lot of the underlying structure. He gave some definitions and theorems about
converting non-deterministic automata into deterministic ones to reflect all of the states underneath, then proposed layering the abstraction ("stairs"), where each
level builds on the layer underneath but does not hide it. The application chooses which level it
interfaces to.
Myth number 2 – Call control
is not interesting. If Not Call Control – What else? This refers to the assertion that the
interesting services are not call control.
He talked a bit about the complexity of the data that the carrier has. I
think the message here was that the network operators have a lot to offer by
exposing some of the data that they have about customers and customer
behavior. (Comment – there probably is interesting data owned by the operator,
but the operator is very vulnerable to criticism for abusing user privacy. Another observation here is that as end-user
devices become more capable, they are capable of collecting all the same
data. Much has been said about the
ability of the network operator to offer billing and payment, for example, but
a smart endpoint can do a better job by caching several of the user’s payment
alternatives (credit, debit, Paypal, etc.) and
offering choices. Endpoints keep call
logs and address books and record usage information now.)
Myth number 3 – Networking with Quality. The problem is that we don't have a single network; we have a "network of networks," and it becomes impossible to have QoS. On the other hand, his statement was that you could deliver QoS "over the top." (Comment – not sure what he was getting at)
Myth number 4 – the need for Centralization. He presented a counterexample based on Skype. Skype centralizes very little, and what little there is gets distributed among multiple nodes at the top of a hierarchy.
Question: Is Skype a better competitor than IMS? Answer – No, Skype isn't a direct competitor. It is easy to use. If you want to compete with…
He started by expressing his
interest in the fact that this conference highlights the role of application
developers. His focus is on mobile
clients that run on the mobile device.
His organization worked on a research implementation of an IMS client in
2005, and it quickly became very complex and difficult to manage. The original structure was a disorganized set of applications on a few core functions. They restructured it as a base client with a set of plug-ins that implement core functions (tightly integrated but separate), plus a bunch of things that don't integrate tightly with the client (gaming, 3rd party apps, etc.), both running on an "IMS Kernel". It all runs on Windows Mobile (they will look at other operating systems later).
He described how a new
component was integrated into the client without disturbing or being aware of
proprietary information in the client.
He talked about their plug-in GUI framework. (Comment – this stuff didn't look easy. It's a fairly complex structure, and they didn't address any of the configuration required; it's all done manually. Unfortunately it looks a bit like reproducing a lot of the complexity of Windows on the end-user device. Maybe that's needed for some things, but I would hope that we have learned something from the desktop experience and can do a better job with our mobile devices.)
He talked about the
communication interfaces and specifically that there are two levels, one aware
of all of the SIP details and one high level.
He also talked about the communication between plug-in components and how you can plug things in between pieces of the standard client to filter events.
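(Comment – a sketch of what a plug-in event filter chain between the SIP stack and the standard client might look like. The interface and class names are my own invention, not their framework, and events are simplified to strings.)

import java.util.ArrayList;
import java.util.List;

public class PluginFilterSketch {

    interface SipEventFilter {
        // Return the (possibly modified) event, or null to swallow it.
        String onEvent(String event);
    }

    static class EventPipeline {
        private final List<SipEventFilter> filters = new ArrayList<>();

        void plugIn(SipEventFilter f) { filters.add(f); }

        // Events pass through every plugged-in filter before the core client.
        void deliver(String event) {
            for (SipEventFilter f : filters) {
                event = f.onEvent(event);
                if (event == null) return; // a filter consumed the event
            }
            System.out.println("core client handles: " + event);
        }
    }

    public static void main(String[] args) {
        EventPipeline pipeline = new EventPipeline();
        // A plug-in inserted between the stack and the standard client,
        // e.g. to suppress instant messages while the user is in a game.
        pipeline.plugIn(e -> e.startsWith("MESSAGE") ? null : e);
        pipeline.deliver("INVITE from alice@example.com"); // passes through
        pipeline.deliver("MESSAGE from bob@example.com");  // filtered out
    }
}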
All of the "client" functions, both standard and plug-ins from others, run in a single process/address space. There are applications that run in separate processes. He talked a bit about the details, including the mechanisms for communication and how they dealt with limitations of the Windows Mobile environment and the need to avoid wasting battery power (i.e. don't write an application that loops looking for an asynchronous event).
Some sample interfaces –
touch dial, a search for contacts, predictive IMS interactive messaging, and
something that displays the user’s reachability via
different underlying networks.
He talked about applications
that used only the kernel – one that showed the weather (getting SIP based
notifications of updates, which gives better
performance).
Another was machine-to-person
interaction that allowed others to send a notification to you and get a
response (example was an eBay bidding system).
All of this was in the network, no “widgets” installed on the client.
Question – did you look at what happened in parallel in the Java world (JSR 281)? It would be interesting to see if they came up with the same abstractions. Answer – yes, but it was hard to find a phone that actually implemented the standard. They weren't stable. Now it's better. Most of the things that were overlapping and similar were in the area of SIP messaging; the higher level functions are completely different.
Question – What about “Appliance Rich Communication”? Answer – they are putting together demos and looking
at productizing (didn’t answer her question).
Question: You
apparently had problems with asynchronous communication. Others have succeeded and built similar
applications. Why?
Question: Have you tried to integrate circuit switched call control? Answer – yes, they did test this with call control for circuit switching underneath (GSM). The ability to do either was a requirement from the start.
The work comes from Telenor and is part of the Mobicome
project.
There are a lot of social networking services that work over the internet. Many involve communication, and it would be nice to be able to do that communication via IMS. Web2.0 has developed a lot of interesting techniques for building services, but they don't necessarily get along with the IMS world – different views on security, "real time" vs best effort, performance issues, and different user expectations (e.g. what does "availability" mean in an environment with many services and many network providers? Is something available if only 9 out of 10 services work? Is 99.99% availability useful if the .01% failure happens only when people really need it?)
She gave an interesting example of presence where some of the presentities represented things rather than people – e.g. the detection of a radio station, proximity to a WiFi network, or a condition like the price of a stock.
The main body of the
paper/presentation was work from her Ph.D. thesis defining new ways to consider
presence and availability. One scenario
was dealing with the conversion of presence information, in this case buddy lists,
from an existing networking service to the IMS environment. She described a structure which integrated
external presence events and notifications.
Then she applied a notion of service availability that weighted the availability of the data in each external domain by its importance, computing an overall availability for the composite service. (Comment
– I found this very confusing because at times “availability” referred to the
notion of availability in a presence service, while at others it referred to
service availability)
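(Comment – a small sketch of the weighted-availability idea as I understood it: the composite service's availability is the importance-weighted average of the availability seen in each external domain. The domains, weights, and numbers below are made up, and I may be oversimplifying her model.)

public class CompositeAvailability {

    // Importance-weighted average of per-domain availability.
    static double weightedAvailability(double[] availability, double[] importance) {
        double weightedSum = 0, weightTotal = 0;
        for (int i = 0; i < availability.length; i++) {
            weightedSum += availability[i] * importance[i];
            weightTotal += importance[i];
        }
        return weightedSum / weightTotal;
    }

    public static void main(String[] args) {
        // e.g. an external buddy-list source, a presence server, the IMS core
        double[] availability = {0.999, 0.95, 0.9999};
        double[] importance   = {0.5,   0.2,  0.3};
        System.out.printf("composite availability: %.4f%n",
                weightedAvailability(availability, importance));
    }
}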
Question: Won't the user get flooded with information from such an integrated presence service? And don't you need filtering? Answer – yes, that's what the concept of weighting by importance is for.
We keep putting technologies with higher and higher abstraction into network services, while still using the same basic physical equipment, which has a long service life. How do we fill the gap? One key element is service composition – putting existing services together into new ones.
He presented a view where someone might have different kinds of network architecture – circuit switched, VoIP, and IMS – with different control architectures on each, but want to support common 3rd party services on all and provide a unified experience for the customer across all 3. (Comment – the picture looked a lot like some of mine, building services that work across multiple domains)
The objectives for orchestration include being dynamic enough to adapt quickly for niche markets, independence from the network and service layer, simplicity, and being "carrier grade". He talked about SCIM as an orchestration implementation, but one dependent on the specifics of IMS (HSS). (Comment – yes, but the concepts extend easily)
His architecture was based on
a real-time Java container and had at the bottom adaptors to various network
signaling, then an abstract session control, then real-time service
orchestration, and on top services and service adaptors which connected to
specific external servers. The real-time orchestration was XML driven and used information retrieved from an external web server holding XML descriptions of what is done (State Chart XML, a W3C standard). He went through a lot of characteristics of SCXML, which implements nested and parallel state machines and actions triggered on transitions, including entering and exiting states. It's easy to customize and can use a scripting language to trigger actions. It is composable (logic can be invoked as a subroutine).
He gave a simple example of
VPN and Prepaid services being used on one call using a simple script to
trigger the services in separate SCPs.
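(Comment – to make the orchestration idea concrete, here is a minimal hand-rolled Java sketch of a state machine that chains two services on one call, with actions fired on entering states. It is my own simplification of what an SCXML engine does, not their platform; the states, events, and SCP names are hypothetical.)

import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

public class OrchestrationSketch {

    // state -> (event -> next state)
    static final Map<String, Map<String, String>> transitions = new HashMap<>();
    // state -> action performed when the state is entered
    static final Map<String, Consumer<String>> entryActions = new HashMap<>();

    public static void main(String[] args) {
        transitions.put("idle", Map.of("call", "vpn"));
        transitions.put("vpn", Map.of("done", "prepaid"));
        transitions.put("prepaid", Map.of("done", "connected"));

        entryActions.put("vpn", callId -> invokeScp("VPN SCP", callId));
        entryActions.put("prepaid", callId -> invokeScp("Prepaid SCP", callId));
        entryActions.put("connected", callId ->
                System.out.println(callId + ": call connected"));

        // Drive one call through the chain: VPN first, then Prepaid.
        String state = "idle";
        for (String event : new String[]{"call", "done", "done"}) {
            state = transitions.get(state).get(event);
            entryActions.getOrDefault(state, id -> {}).accept("call-1");
        }
    }

    // Stand-in for triggering a service in an external SCP.
    static void invokeScp(String scp, String callId) {
        System.out.println(callId + ": triggering " + scp);
    }
}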
Question: How
does this relate to W3C work on service brokering? Answer – not
familiar in detail with the spec but believes that their platform is an
implementation of the W3C concept.
Question: How
can you do this without interfering with protocols? Answer – The
service and network adaptors handle the mapping of protocols into a common
independent format.
Question: If we have to express the service we are defining as XML, don't you need to reproduce the underlying protocols as APIs? Parlay mapped everything into common APIs, but the result was a very limited set of capabilities (too much was lost in abstraction). How do you do this? Answer – the adaptors cheat! The high level interfaces are used for service orchestration, but the service adaptors can go around the abstraction level and talk directly to the underlying network adaptors to work in the raw protocol.
Communication time has doubled in the past few years, but "fixed" has been declining and mobile has grown only slowly; the big new communication player is the web (the internet, Skype, Google, etc.).
Is this a benefit or a threat
to communication service providers? Key
is to provide some value and capture some of the web traffic. The winning
strategy is to take advantage of the social networking phenomenon. Users are not just Consumers but also Producers of information ("Prosumer" – he made a comparison to the camera market, which is where I had heard this term before, meaning someone between a pure consumer and a media professional; I hadn't known he intended it in the same sense).
There’s a lot of technology
to do this that implements mashups. In order to be responsive in this world you
need extreme time to market, so things need to be deployed quickly in
“perpetual beta mode”.
Some interesting statistics on Facebook – 40,000 developers and 1,500 new applications less than 1 month after the API was released. Users are overwhelmed; there are over 4,400 applications now.
Web2.0 development cycle is
quite different. Instead of massive
testing, they expose the system to users early and rapidly respond to feedback,
fixing problems and adding and dropping features in response.
Another problem the network
operator faces is one of perception.
Stuff from the network operator is automatically seen as boring. Have to somehow change this perception.
He presented 3 business models. The first is based on open collaboration, opening APIs to MVNOs or Web2.0 companies. The idea here is not having people build directly for your network, but bridging services managed by some other company into your network.
One interesting thing he
described was interfacing to Second Life – they wanted to allow participants to
communicate including prepaid billing.
The problem was that people in Second Life are anonymous and you don’t
want to reveal their real identity in providing the service.
A second model is for a
company to manage their own ecosystem of application developers. You can also open an API that the developers
build on and have it managed by a 3rd party.
Another example was to expose
an API which allowed a Facebook application to send
SMS messages. Roberto’s work is actually
on the SDE for this – a web based plug together environment that allows a
developer to put together widgets from a large library to create new
applications.
Interesting quote from …
Question – how close is this to commercial
launch? Answer – it’s there technically, but the key business
people have to be convinced that this is in Telecom Italia’s interests
(Security, revenue, risks).
Erik was formerly with BEA, which is where the work really comes from, I believe. He asked who is in the room (telecom architects, business analysts, researchers, developers) – but who do we want to be? Service architects, composers, service seekers, consumer advocates.
What’s important in the
presentation title is Agility and IMS together -- The ability to rapidly build
applications that use communication. He
cited a personal example – he keeps bees and built something that would weigh
his bee hives and use GPRS to report the data.
He referred to Agile Software Development as the goal. He presented a structure with multiple silos of service development (IPTV, telephony, content, etc.) and then two points of composition, one using the SCIM on the network side, and one using SOA and Web services on top. The idea is to allow each service domain to build what it needs the way it wants, but to integrate around the products rather than having to build something universal. (Comment – this is very interesting. Complexity and difficulty go up non-linearly as the scope increases, and this is a way of segmenting to keep the scope small.)
He described the Clover
coffee machine, which is the ultimate coffee brewing machine, and is internet
enabled, so you can create a profile describing how you want your coffee made
(including brewing temperature, roasting characteristics, etc.), then re-use it
somewhere else with a similar machine.
Traditionally service
deployment in telco has been 18 months. Using Agility they have gotten it down to 3
months. The goal is 3 weeks, which is still forever in internet terms but good for telco.
There are tons of services and applications everywhere; the real problem is finding ones that bring revenue.
She showed an interesting
long tail chart, which was overlain with different models in different parts of
the space – “Walled garden” for the popular stuff, Partnerships in the middle,
then some open interfaces for 3rd parties in the long tail.
She presented an SDE in
layers with different tools in different layers – service development,
composition, integration. Is that enough? She looked at several scenarios, one building
social networking tools, one doing a mashup, one
doing an enterprise portal.
When she looked at using the telco SDP for social networking, she found a lot of gaps between what was needed and easy to use and what was available. She also looked at the mashup tools that are out there. (Comment – there are lots of them, and I recognized only a couple. Looks like an area to brush up on.)
She showed some examples that used mashup tools to build things that, through a service gateway function, would connect to the telecom network and allow the resulting service to take advantage of the capabilities of the networks.
Question (Telecom Italia) – how do you handle the expectation of reliability, and does the operator become liable for problems in such a complex structure? The answer talked about how this works today. The questioner followed up with some specifics on how they worked with BT on this. She didn't reveal any problems they had.
Question (Bernard Villain) – For all: these presentations are all telco-centric – taking stuff from the outside to make our services. Isn't this the wrong model? Google and the like want to take telco piece parts to enhance their offerings. Erik – spot
on, it’s about who gets there first.
That’s why Agility is key. At the worst we are just bit pipes, maybe we
get to expose some services, the ideal is that we control the environment and
create the high value services or have them created on our platforms. Roberto – have to think about the user. For web users it’s tough. Think about other markets – game market is
huge, players are moving towards networked virtual reality. This is a great market for the operator
because there are no standards or entrenched players. If we can get there with something attractive
we can participate in a huge market. (Comment – Telcos have always been reluctant to participate in gaming, in my experience. Maybe this is just true of the US companies that descend from AT&T, but gaming was always a dangerous thing to propose when seeking support for new services work, because too many people either thought of it as frivolous or associated it with gambling.)
He went through a pretty good introduction to what an SDP is and why it's hard, talking about the need to push APIs down into the network, allowing software to be downloaded into devices and embedded components.
Clustering basics – a cluster uses a bunch of processors running the same software base to serve a series of user events. The first problem with session-oriented services is that you have to direct requests related to a session to a processor that already holds the appropriate data, rather than to a new one. To make it reliable you need replication of session data. You want it in 2 or 3 nodes, not all (for performance reasons).
What are the issues for SIP Servlets:
Mix of protocols (SIP, Diameter, HTTP, SS7, etc.)
Mix of technologies
Distinct roles
Different types of interactions (best effort, RPC, etc.)
Different services
Different thread patterns (blocking or non-blocking, publish/subscribe)
It’s not always obvious where
the session context is. Consider
conferencing, where the participants may not be known by the balancer. Sometimes the same service has both HTTP and SIP components, and that means two load balancers which may make different decisions. Failure is another issue: if the node holding the primary copy of the session data fails, how does the load balancer know which other node has it? (Comment – that can be solved with the re-assignment of local IP addresses – the backup assumes the identity of the primary, and the load balancer doesn't have to know that anything happened.)
He tackled the replication
problem and presented some alternatives for finding the replicated
information. Using smart keys, each node
knows its buddy and the load balancer knows how that is computed. Another is dynamic hash tables, where a
session ID is hashed to find the primary and secondary.
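(Comment – a minimal sketch of the hashed-placement idea: the load balancer and the nodes hash the session key the same way, so the primary and secondary replicas can be located without any lookup. The node names, the neighbor-as-backup rule, and the use of the SIP Call-ID as the key are my assumptions.)

import java.util.List;

public class SessionPlacement {

    static int primaryIndex(String sessionId, int clusterSize) {
        // Math.floorMod avoids negative indices from negative hash codes.
        return Math.floorMod(sessionId.hashCode(), clusterSize);
    }

    static int secondaryIndex(String sessionId, int clusterSize) {
        // The backup copy lives on the next node in the ring.
        return (primaryIndex(sessionId, clusterSize) + 1) % clusterSize;
    }

    public static void main(String[] args) {
        List<String> nodes = List.of("node-a", "node-b", "node-c", "node-d");
        String callId = "a84b4c76e66710@example.com"; // SIP Call-ID as session key
        System.out.println("primary:   " + nodes.get(primaryIndex(callId, nodes.size())));
        System.out.println("secondary: " + nodes.get(secondaryIndex(callId, nodes.size())));
    }
}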
He presented an API for writing applications, part of which was a lightweight transaction protocol where an application can specify a set of operations to be performed as a transaction, and either all or none will happen as a result. This allows code to be shared without a "heavyweight" threading/concurrency mechanism.
(Comment –
nothing really much new here yet.)
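(Comment – a sketch of the kind of all-or-nothing grouping of operations described, assuming compensating undo actions rather than locks. The class and method names are my own illustration, not ALU's actual API.)

import java.util.ArrayList;
import java.util.List;

public class LightweightTxSketch {

    interface Operation {
        void apply() throws Exception;
        void undo(); // compensating action if a later operation fails
    }

    static class Transaction {
        private final List<Operation> ops = new ArrayList<>();

        void add(Operation op) { ops.add(op); }

        // Either every operation is applied, or the already-applied ones
        // are undone in reverse order.
        boolean commit() {
            List<Operation> done = new ArrayList<>();
            try {
                for (Operation op : ops) {
                    op.apply();
                    done.add(op);
                }
                return true;
            } catch (Exception e) {
                for (int i = done.size() - 1; i >= 0; i--) done.get(i).undo();
                return false;
            }
        }
    }

    public static void main(String[] args) {
        Transaction tx = new Transaction();
        tx.add(new Operation() {
            public void apply() { System.out.println("update session data"); }
            public void undo()  { System.out.println("revert session data"); }
        });
        tx.add(new Operation() {
            public void apply() { System.out.println("replicate to backup node"); }
            public void undo()  { System.out.println("drop backup copy"); }
        });
        System.out.println(tx.commit() ? "committed" : "rolled back");
    }
}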
Their experience with programmers has been positive. They like this as a very lightweight way of recording permanent data associated with a service session without a lot of overhead. It's in production in several ALU products.
Question: What's the performance impact? Answer: Minuscule compared to the SIP processing. (Comment – Performance and Java are an interesting mix. I would expect that the basic overhead of doing dynamic protocols in Java is worse than either the concurrency or the protocol impacts.)
Question: How does it scale? Answer – pretty well; they make use of publish/subscribe to guarantee scaling of key pieces.
Ad hoc networks consist of devices with no pre-arranged structure – mobile phones, military applications, laptops in a conference room, etc.
IMS SIP Servlets are not immediately compatible with Mobile Ad Hoc Networks (MANETs). The problems include the business model: IMS is a centralized model and puts too many of the resource-intensive functions in central (network supplied) functions. There are resource problems as well – SIP Servlet is sufficiently "heavyweight" that it's not going to run on a mobile device. (Comment – I didn't think this was true, but he may have some specific implementation of servlets in mind)
The proposed solution broke the SIP servlet environment down into components that allowed it to be distributed across multiple members of a MANET. The pieces are connected via a very lightweight interaction protocol.
They built a prototype with two objectives – prove you can do it, and measure the performance. For a session with 2 nodes the response time was almost 5 times as long, but the penalty got smaller as nodes were added to the session. (Comment – I'm not sure exactly how this was done; it looked like he was adding users incrementally. The other question is how many SIP sessions ever have more than 2 nodes. Even in so-called conference situations, the sessions are often constructed so that each participant talks to a server rather than to the others.)
(Interesting, he introduced
himself in French, and to my untrained ear his French was better than his
English.)
The need for an SDP is driven
by the integration of multiple domains, NGN, Enterprise computing and
internet/web. (Comment – I guess it’s the same everywhere. It would be interesting to hear from some
place where there was a fundamentally different force.)
Their approach seems to be relatively strict IMS – a service broker (SCIM) on top of the session layer, then internal app servers and a Parlay/X gateway to 3rd party servers. He went through some simple scenarios involving session control exposed through an abstract interface.
Their SCIM function is based on SIP Servlet because it's the most widely accepted standard. (Interesting – it must not be possible to orchestrate based on events/messages that don't map well into SIP sessions.) The details included using BPEL to describe the composition of service components via a service bus. The scripts are stored in a profile database and created using an SDE. There are controls on execution governed by SLAs, as well as authentication. Orchestration is based on context to select the appropriate components (e.g. implement different ways to access a personal schedule depending on where the user is).
They prototyped this to
understand it, but no deployment is currently planned.
Question: Your
charts showed web services used on the “southbound” side – the service running
on your platform invokes web services elsewhere. Do you also have a way of exposing
capabilities and services to others as web services? The answer
wasn’t clear, so a follow up asked whether they would expose services to all or
contract with specific 3rd parties who would be allowed to build services? I think the
answer was really focused on the latter approach. (Comment
– this is interesting, many interesting questions in service architecture and
implementation come down to “who is in control”. Telecom people generally architect systems
where the Telecom world is in control and calls out to the web world to get
things done, yet the web world generally architects it so the service is in
control and invokes capabilities from wherever it needs. I think this is a lot of what creates broad
usability – if the developer has to work in a world where someone else is in
control, he/she has to write for a specific platform and often needs some
business relationship with the provider of that platform, but by structuring it
so the capabilities are just out there and can be used openly without any
constraints on the platform used by the service you empower people to come up
with new uses.)
There were a total of 11 poster presentations. I didn't have a chance to talk to all of the presenters; here are summaries for the ones I did see.
This was a mobile conference
service. The Mobile phone has both voice
and data connectivity. The Voice is
hauled by the mobile network through the PSTN to an Asterisk based platform
which provides the conferencing, and the data connects to a Web2.0 based
control. It provides basic setup of
conferencing, display of who is talking, the ability to add parties either as
private side conversations or to the public conference. They also provide application sharing to display PowerPoint and other applications on all screens (not sure where the application runs). This is an evolving service they are trialing. (Comment – two interesting things – first, the very centralized nature
of the architecture – all the voice streams go to an internal server which has
all the “smarts” to figure out who gets what audio, including really smart
things like suppression of echoes.
Second, the separation of control and transport allows them to use the
web for what it’s good for (control) and re-use the PSTN as is.)
The poster basically just
reproduced the short paper. The
interesting thing was talking to the presenter about the migration
problem. He says that IPv4 exhaustion will happen in 2 years (no new addresses left), and that it will happen everywhere in the world, mainly because ISPs from regions that exhaust first will join regions that have remaining capacity and trade addresses (e.g. addresses allocated to Africa may show up in …)
This work exposed the session
control from IMS as well as access to certain profile data via web services,
with a server on the web side integrating the two to provide services.
This contribution basically
described a system that provided a service to deliver notification to a variety
of places – mobile phone, fixed phone, TV, internet, etc. It was described more as a concept – notification independent of medium; the system figures out how to deliver it. (Comment – This is a nice concept. As with a lot of others, it's been out there
in theory a long time. Unfortunately,
the problem has been that operators don’t share the right interfaces, and even
when they do the huge variety of ways to produce notification on phones and set
top TV boxes makes this difficult to do in practice. The author thought this was a solved problem,
that operators were cooperating to provide it.
In my experience this hasn’t happened yet.)
This described the application of model-driven development to decompose services and then map them onto different network and device architectures using different composition rules (widgets, BPEL, etc.). He talked about building in the ability to iterate the solution, but that wasn't clear.
This was a presentation and
demonstration of a user interface approach which provided voice, touch, and
motion as ways of making selections and supplying input on a mobile
device. The architecture was implemented
on 3 different architectures (SmartPhone, Java, and
Windows Mobile), and different devices.
Depending on the device different functions happened in the device or in
the network (e.g. speech recognition was in the device on some, but in the
network using Java). The demonstration was impressive, using speech to select address book entries and to fill in configuration menu items in a noisy room (maybe even more impressive was that he was speaking in German to set some date-based configuration information while the display was in English). The motion capability used optical or motion
sensors in the phone to detect it being tilted and allow menu entries to roll
in response. On some phones it also
provided some feedback by shaking the phone as things moved.
In addition to the “cool”
factor, the result was interesting because of the potential application to
people with disabilities.
Steffan Uellner and Stuart Sharrock summed up the common themes from the conference:
· Estimating the business value of new services and applications: There are some processes and tools emerging, as well as tools for predicting the end-to-end user experience. (Comment – maybe, but what I heard in MANY sessions was the lack of understanding of what services would earn money)
Stuart was able to announce that ICIN 2009 will be held the week of 10/26 in Bordeaux. Dan Fahrrman (Ericsson) will be the next TPC Chair, looking forward to creating the program for 2009. (The committees held a planning session immediately following the conference, and I learned some additional details. The plan is to hold the 2009 conference in the city of Bordeaux, probably in the Cité Mondiale conference center. Expect the call for papers to come out in December or January, with abstracts due at the end of March.)