This is the 20th anniversary of the first ICIN, the 13th conference, and the 8th one I have attended. Over the years there has been substantial change in the topics, from an early focus on the Intelligent Network to a more general focus on services, and from strictly voice calling to multimedia services, but the nature of the conference, providing open discussion of practical ideas without product promotion, has not changed. Attendance has decreased as the telecommunications industry and general economy have faltered, but there were still about 100 attendees representing over 50 organizations from nearly as many countries. Some of the key themes I observed this year were:
- Communication is Personal. Instead of discussing services for broad categories, like residential or enterprise, we have arrived at the notion that each subscriber will want uniquely personalized services, based on their profile, usage history, context (location, etc.), and social networks. Many papers addressed aspects of this.
- A clearer view of the competition in communication services. This year the papers on business and business models presented a much clearer picture of how competition is taking place for services. In particular, competition among network operators, internet players (e.g. Google), and endpoint vendors (e.g. Apple) for the attention of the creative people who develop services is influencing the availability of services in networks.
- A Tsunami of Video riding an ever-widening broadband network. We saw discussions of the growing importance of video, both as a service by itself and as an adjunct to other services, and the explosion of bandwidth being deployed that will make it all possible.
- Voice is only an aspect of services. Few papers even mentioned placing and routing voice calls. Instead voice was only one aspect of a personalized service, and many services discussed had no voice path at all.
- The implications of the internet of things. We have been hearing about the explosion in the number of network-connected devices for several years. This year began a more serious exploration of the implications of this trend, including ways of addressing, managing, and communicating with them.
- A touch of Green. Several papers and keynotes touched on how communications services may assist in the struggle to develop a cleaner and more sustainable energy industry, while one poster detailed how assets of the communication industry may be deployed in achieving a smarter electrical grid. This will be a growing theme.
- Smart endpoints + always-on broadband + Cloud Computing are building blocks for revolution. This is a personal observation, not so much a common theme. Many presentations addressed 2 or more of these elements and, as a result, proposed radically different business models or service characteristics. I believe the reason is that with the widespread deployment of 3G, the availability of many capable networks of “resources for rent”, and the current generation of intelligent endpoints with extensive capabilities and standard environments, we have crossed a threshold which will allow new ways to build services and allow new competitive models.
Perhaps as interesting is what was not in the conference or appeared to be declining in interest:
- Intelligent Network. No paper was about IN and only one that I recall mentioned it. There is a realization that IN will remain a part of our networks for many years, but evolving it is no longer needed, and interworking with it is largely a solved problem.
- Building IMS. Yes, IMS was still a prominent feature of many presentations, but instead of the focus being on building or extending IMS, it was on using the IMS to build new and interesting services.
- Building IPTV. Again, video and IPTV are important, but the focus is more on using them in services than on building them.
- APIs and SDPs. Many previous conferences have focused on describing programming environments. They are still important, but there has been a realization that we already have too many APIs, and the focus was more on understanding the context in which APIs are used and what they may be used for than on developing better APIs.
ICIN is in my experience a unique event. It is not a marketing-focused conference, where many presentations are thinly disguised sales pitches by vendors who have often paid for the opportunity to speak and whose talks are not reviewed for accuracy or interest. Nor is it a research conference, where the papers, though thoroughly reviewed and scientifically accurate, often report very narrow and advanced results that are important only to the other people working directly in the same narrow and advanced field. ICIN is a broad mix of vendors, researchers, carriers, and consultants reporting and openly discussing technologies, market trends, and other things which impact the telecommunications business. As it is a small conference held in a relatively small city, it also provides a great opportunity for networking with other attendees and developing contacts that are invaluable in doing your “day job”.
In the best years in the business ICIN has featured lavish receptions and meals sponsored by vendors and carriers looking to make a good impression, but in recent years budgets for this have declined significantly. This year the organizers decided to make the usual “Gala Dinner”, which has historically been underwritten by a sponsor, an optional event with an extra charge, and it was no surprise that a substantial majority of attendees chose to pay for an evening of wine and food in a café in the newly restored Bordeaux Opera House. The location of the conference in a large hotel in the somewhat isolated Chartrons district of Bordeaux probably enhanced other networking, because getting to restaurants and sightseeing required some effort, so most attendees did not leave the conference during the day and formed dinner groups for the evening. With only about 100 attendees, it is in fact quite likely that most had an opportunity to talk to everyone else at the conference.
The first day of the conference featured 2 tutorials and a workshop on standards, as well as meetings of the Technical Program Committee (TPC) and the International Advisory Board (IAB), which together plan and run the conference. Because of committee meetings I could only attend the Web2.0 tutorial, which was probably the most popular of the sessions and the most pertinent to my current work.
He gave some statistics on Broadsoft -- 380 people, 450 customers (50+ IMS), and 1100 staff-years in code. They have a web2.0 API interface (Xtended) and a marketplace that allows developers to share applications across their base. He went through various Web2.0 technologies and how they were being used in service platforms such as Broadsoft’s for service creation in telecom networks.
(Comment -- the presentation so far looked to me like Web2.0 technology applied to "Telco 1.0" business -- focused on the network operator more than the developer. From what I have seen, much of the excitement around Web2.0 results not from the specific technologies used to build and “mash up” services, but from the fact that they, together with the way that they are deployed, enable a much more loosely coupled model of service creation and deployment.)
He started to describe the technology, and we had a major diversion over REST vs SOAP. SOAP was described as complicated, and REST as a reaction against it. The major distinction is that REST is stateless and will stay that way, and REST does not address many areas (e.g. security).
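(Comment -- to make the contrast concrete, here is a minimal sketch of a REST call in Python. The endpoint and resource layout are invented for illustration; this is not Broadsoft's actual Xtended API.)

```python
# REST in miniature: the whole "contract" is an HTTP verb plus a URI.
# No WSDL, no XML envelope, no matching protocol stacks on both ends.
import json
import urllib.request

BASE = "https://api.example.com"  # hypothetical service

def get_call_log(user_id: str) -> list:
    """GET /users/<id>/calls -- the call log comes back as JSON."""
    with urllib.request.urlopen(f"{BASE}/users/{user_id}/calls") as resp:
        return json.load(resp)

# The SOAP version of the same operation would wrap the request in an
# XML envelope described by a WSDL file and require a full SOAP stack
# on both the client and the server.
```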
(Comment -- initially SOAP was itself a "Simple" reaction against the complexity of CORBA-based APIs.)
Question (Dave Ludlam) -- What makes you think REST will stay simple? SOAP got complicated because people wanted to do complicated things. (The view was that REST will stay simple and only address simple things.)
Another perspective from the audience -- SOAP requires the client and server to have the full protocol stack, and both to have the same stack. That doesn't work for thin clients without the processing and memory to hold a full stack. REST doesn't require much to participate.
(Comment -- this strikes me much like the debates over dynamic typing and keyword-based parameters that took place two decades ago between proponents of languages like Object Oriented Lisp and Smalltalk and proponents of Simula or C++. The ability to make use of an interface without having to implement the full range of things it covers is empowering, but it also opens up the potential for the interface to fail when a lightweight client ignores something important on the other side of the interface (e.g. an error condition). That may be acceptable for many applications.)
Perspective from one of the speakers -- Amazon was a pioneer in SOAP and REST -- they opened APIs in both technologies. 89% of the developers used REST. (Comment -- again, what this says is that roughly 90% of applications are simple, but without more information you can't tell whether the 10% that didn't use REST were the ones that people really use most.)
Perspective from Roberto Minerva -- programmers are lazy (my words, not his). They pick things that are as easy to use and as unconstrained as possible. That's why simple interfaces win.
Question: Publish/Subscribe and Push interfaces are really needed. The problem with HTTP (the underlying basis of REST) is that it's so client/server focused that many applications are constantly using GET for polling -- this is very inefficient. (The speaker noted there is a potential solution, something called PubSubHubbub.)
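(Comment -- a sketch of the difference in Python, with hypothetical URLs. The subscribe step below follows the general shape of the PubSubHubbub protocol -- a form-encoded POST to a hub -- but omits its verification handshake.)

```python
# Polling vs. publish/subscribe, in miniature.
import time
import urllib.request
from urllib.parse import urlencode

FEED = "https://example.com/feed"        # hypothetical resource
CALLBACK = "https://me.example/callback" # where the hub POSTs updates

def poll_forever():
    # The pattern criticized above: most of these GETs return nothing new.
    while True:
        urllib.request.urlopen(FEED).read()
        time.sleep(60)

def subscribe(hub: str = "https://hub.example.com/"):
    # The push alternative: register once; the hub then POSTs to our
    # callback only when the feed actually changes.
    data = urlencode({"hub.mode": "subscribe",
                      "hub.topic": FEED,
                      "hub.callback": CALLBACK})
    urllib.request.urlopen(hub, data.encode())
```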
The bottom line is that Broadsoft is putting more and more into REST technology -- northbound APIs for applications are built that way and are intended to cover most services.
"63% of all APIs on the web are RESTful"
The other key technology he discussed was Widgets. (Comment -- Widgets have their origin in the X Window System, I believe, and as such are probably a 20-year-old architecture.) There are different kinds of Widgets -- mobile, desktop, or web based. They are independently built and integratable graphical user interface elements that allow you to rapidly customize look and feel or build an interface very quickly.
An example given is a call log widget that allows you to log your calls and click on them to call back. Another is a voice mail widget that includes the ability to do speech recognition for voice-to-text transcription. (That's done by a 3rd party.) The pieces are actually "Google Gadgets".
This is a great example of Mashups. (Comment -- a broadly usable "widget" technology is one of the missing pieces that I believe prevented internet-based call and voice mail management from taking off 10 years ago. When my team at Lucent worked on these things, a major problem was figuring out how to deploy the "client", which would provide the user interface to alert the user to new calls and allow the user to browse the log and address book and initiate calls. There was no good way to do it other than as a PC application, and deployment of PC applications, especially to enterprise clients with tightly controlled software environments, was a major barrier to adoption of the technology. The same problem appeared in the converged desktop applications combining circuit centrex/PBX with IP voice 5 years ago. If "Widgets" provide a way to do this more easily they would make deployment and management much easier.)
JSON -- JavaScript Object Notation. This is a lighter-weight alternative to XML for describing objects. (He also described it as a workaround for restrictions in
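(Comment -- a quick illustration of why JSON is lighter: the same record in JSON and in XML. The record itself is made up.)

```python
import json

call = {"from": "+15551234567", "to": "+15559876543", "duration_s": 72}

print(json.dumps(call))
# {"from": "+15551234567", "to": "+15559876543", "duration_s": 72}
#
# A comparable XML rendering carries noticeably more ceremony:
# <call><from>+15551234567</from><to>+15559876543</to>
#       <duration_s>72</duration_s></call>
```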
He spent some time talking about Web Services vs Web2.0. (Comment – the material is in the tutorial; it describes that "Web Services" is used both for an architecture of how to build web applications and as an umbrella covering a host of specific protocols, while "Web2.0" is a vaguer term often dealing with the kinds of applications built this way (e.g. social networking, blogs, rich user experiences).)
BroadSoft has had good success with their Web2.0 interface. One aspect highlighted is that many new services are not about making point-to-point calls. Many involve calling shared sites ("dashboards"), which help users find each other either in real time or not. Many also involve interactions between users and services. (Comment -- one of the things that I noticed as I moved between building applications for shared computers and networks and building telecom applications was a change in perspective. Computing applications have a lot of different usage patterns, including the "dashboard" example, while telecom was largely about users talking to users, with the software intervening to assist users in completing those calls, not interacting with users directly. Part of what seems to be happening in services is that we are now supporting a richer set of usage patterns in those services.)
Their perspective is "Rich applications with Reach", also known as "Voice2.0". Rich applications include a powerful user interface on top of your voice services, but reachability from multiple environments (mobile phone, dumb phone, desktop, etc.) is also critical.
(Comment -- he closed with a very traditional Telco pitch -- increase usage, increase ARPU, create "stickiness", build brands, and be able to charge for it. This again suggests Web2.0 technology applied to Telco1.0 business models.)
Question (Tellabs guy) -- how do you secure all this stuff? (Comment -- yes, with the telco viewpoint that's an important issue, though not one I think the web2.0 companies would focus on.) He gave some descriptions of using web technologies for authentication and access control.
Answer from another panel participant on security -- Facebook has a huge user community, and they opened their platform to developers. Developers register and authenticate with Facebook and get secure keys that allow your application to work. They enforce rate limits on you. If you behave, you get higher limits. They include restrictions on what IP addresses your application can work with. Twitter has a model where a developer can pay for higher rate limits. (Comment from the questioner -- this allows a developer to basically be a sleeper -- you can infiltrate the site by being well behaved in order to gain access to valuable information before you exploit it and run. Roberto Minerva said that the answer on the web to this is to trust the user community to police this kind of thing -- you can misbehave for a while but you will be noticed, reported, and shut off quickly.) (Comment – “trust the user” is a repeating theme in the development of shared computing. I believe it works well in a small and closed user community where peer pressure is effective, but not in large open communities where someone can “hit and run” without feeling pressure for their acts. One of the early timesharing systems, ITS at MIT, adopted the trust-the-user philosophy, and all new users were shown how they could crash the system and told not to do it. Because this was a closed community, it worked. The same kinds of assumptions built into early browsing and email programs first used in “friendly” user communities became security holes that were difficult to close when those programs were used on the public internet.)
Rebecca Copeland – The Internet has security too, just different. The Apple store won't take just anyone (it has rejected things from Google). (Someone from Tellabs said that breaking iPhone security was once a big sport for hackers, and fairly easy. The problem is once you break in you have the keys to everything, so it's basically not trustworthy.) (Comment -- it may not be trustworthy, but it's got a lot of users and it's very popular. Microsoft software is notoriously insecure, but that doesn't seem to have kept them from gaining a virtual monopoly on desktops, browsing, office applications, and more.)
Operators have to understand the Web2.0 world -- they compete with it and they have to address it. Two principles that telecom people don't generally understand:
- Internet users and applications are loosely coupled, not tightly bound.
- The end-to-end argument (nothing that can be done at the edges should be embedded in the network).
Some other observations he made:
- On the web everything is data related. The focus is on transporting and accessing data.
- People aren't interested in transport brands -- everything is substitutable.
- People aren't going to use things to capacity (they're not going to buy enough content to fill an 80G iPhone).
He gave some interesting Web2.0 principles:
- Global addressing.
- Web as a service (lightweight technologies, best-effort approaches, asynchronous interfaces, plug and play).
- Mashups (data + APIs).
- End of the software lifecycle (everything is perpetually in beta test, then you move on to the next thing).
He gave some perspectives on specific aspects of the technologies of Web2.0:
- Tagging -- tags help programs understand how to navigate data. They allow different communities to assign different values to the things out there -- set your own ontology for data. Different communities may organize vehicles differently (some will be interested in performance, some in environmental issues, some in ruggedness, etc.). Tags allow the user to tag someone else's data and share their ontology with others.
- Ajax -- allows data from the web to be downloaded asynchronously (the example given was Google Maps -- behind the scenes, the application downloads a lot of tag information you may or may not be interested in as you look at the map, so that it can be displayed quickly if you want it but you don't have to wait for it up front); see the sketch after this list.
- URIs -- every resource is identified with a URI (this is the global addressing scheme).
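(Comment -- a rough Python analogue of that Ajax pattern: fetch data in the background so the user-facing code never blocks on it. A browser would use XMLHttpRequest; the tile URLs here are hypothetical and the fetch is simulated.)

```python
from concurrent.futures import ThreadPoolExecutor
import time

TILES = [f"https://maps.example.com/tile/{i}" for i in range(9)]  # hypothetical

def fetch(url: str) -> bytes:
    # Stand-in for the real HTTP GET an XMLHttpRequest would perform.
    time.sleep(0.1)
    return b"...tile data..."

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = {url: pool.submit(fetch, url) for url in TILES}
    # The "user interface" keeps running here; each tile is consumed as
    # soon as its download completes rather than waiting for all of them.
    for url, fut in futures.items():
        print(url, len(fut.result()), "bytes")
```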
A key problem of building Web2.0 applications is data integration – there is lots of data in different places; how do I integrate data from different sources? (Google’s answer: "If it doesn't appear in a Google search, it doesn't exist".) (Comment -- data integration is a huge problem; it's the problem behind building single-identity solutions, avoiding redundant user profiles, and many other problems.)
He described something called "Dataweb" – a structure of data that allows people to view each other’s data in a structured way, with access control. Depending on who you are, you get to see different levels of data from another person (for example, business-card data such as the company phone number may be available to all, but the personal mobile phone and other profile information may not be).
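(Comment -- a minimal sketch of that idea: one profile, different views depending on who is asking. The field names and viewer categories are my own invention.)

```python
PROFILE = {
    "name": "Jane Smith",
    "company_phone": "+44 20 7946 0000",  # business-card data
    "mobile_phone": "+44 7700 900000",    # personal
}

# Access control: which fields each class of viewer may see.
VISIBILITY = {
    "public":    {"name", "company_phone"},
    "colleague": {"name", "company_phone", "mobile_phone"},
}

def view(profile: dict, viewer: str) -> dict:
    allowed = VISIBILITY.get(viewer, set())
    return {k: v for k, v in profile.items() if k in allowed}

print(view(PROFILE, "public"))     # business-card data only
print(view(PROFILE, "colleague"))  # includes the mobile number
```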
One thing that has changed in the Web world is applications that are persistent in the terminal. "Google Gears" is an example. This makes the web persistent without persistence in the network. (Comment -- I think what he's getting at is that instead of relying on the network to maintain a persistent connection and state between an endpoint and a server, you can put the intelligence in the endpoint to re-establish whatever is needed with the server when it's useful, with no help from the network.)
Google controls their complete infrastructure, then asks vendors to supply little pieces. It is difficult for anyone else to clone it because Google is very secretive about how they put the pieces together. Google doesn’t go through standards bodies to decide their interfaces.
Distributed programs need network APIs. That's where SOAP started. It uses HTTP and XML. The thinking has evolved and now REST is the preferred solution for network APIs. He said Yahoo had the same experience as Amazon – they put out REST and SOAP APIs and 90% of the developers use REST.
"REST
is more oriented towards the web because it's focused on URIs.
(Comment -- this one I think is key. SOAP is essentially CORBA over a different
technology -- the vocabulary used to describe interactions comes from CORBA
(RPC, objects, parameters, etc.). REST
was done from scratch using the notion that all resources on the web should be
reachable, and is at the same time simpler and broader (i.e. can be used for
things that aren't RPC)).
Mashups – These are a key way to do composition. The approach doesn’t tell you why to do composition or how things should be composed. This is the key difference between Mashups and the Telco view of service composition. The Telco view starts with a model of what you are going to do (i.e. provide call control) and lets you plug in selected pieces. REST just provides a mechanism -- it doesn't have any built-in perspective on what you are doing. (Comment -- another "Aha" moment here.)
Reachability -- anyone can buy a domain for a few dollars and be reachable virtually anywhere in the world. The biggest Telco operator has "only 100 million or so" users. This is why Web companies can grow so rapidly. (Comment -- and why developers are so much more interested in building for them.)
Example: StrikeIron -- a company that combines APIs and data. (Comment – the company provides a lot of little applications that use web-based data to perform simple business tasks, like tax calculations, address and telephone number verification, SMS integration, etc. These little applications are designed to integrate easily into a web-based business solution.)
Perspective from Dave Ludlam -- Operators can't behave like Google and others on the web because they have legal liability. Google collects and sells lots of data about everyone without permission. (Comment -- yes, but what this tells me is again that the Telco model is in trouble. People have voted with their mouse clicks to let Google do this because Google does what they want. Google wasn't the first and won't be the last. The Telco model has been to give a company a lot of power over the user and hold it legally responsible for what it does with that power. The Web model is to let the user decide who they will give control to, and let the user decide what to do when that power is abused or when that company no longer lives up to their expectations.)
Cloud Computing. The network is a set of networked resources that can be freely aggregated to support applications. Cloud Computing is simply a structure put on top of that to make it easier to do this. (Comment -- this is another repeating theme here -- architectures built by conventions applied to what is essentially a wide-open underlayer. One of the first places I saw this pattern was the original Emacs editor. The original Emacs was defined by a set of conventions for things like tracking user preferences, file locations, etc., and for how to share information between little applications that did things like searching, replacing, formatting, etc., all run on top of an essentially open layer that implemented the basic file manipulation and screen display. It proved to be powerful because it empowered dozens, and eventually probably thousands, of people to contribute their own little applications.)
An example is Amazon -- they assembled a cloud computing solution to support some applications that competed with Google -- the A9 search engine, street view, etc. The applications failed, but they could get value from the infrastructure by renting it out (cloud computing). Now they rent it to FreeSWITCH, which is an open source softswitch that provides telecom services that compete with the operators. (Who would have thought a few years ago that a bookstore would be a major telecom player?)
Availability -- cloud computing is experimenting with active/active and N+K availability to provide very high availability, but with a best-effort network underneath it, the user's availability will be limited by the network. (Comment -- well, maybe; if the user participates in the high availability solution with redundant connectivity to the cloud, they can achieve high availability without high-availability communication, but it takes more work than most think.)
A paradox in cloud computing -- cloud computing providers have started out to provide a value-added service (high availability, scalability, etc.), but wind up delivering a commodity solution.
Social Networks -- creating lots of value for users. The problem is there are many of them and they don't interoperate. Google has an answer, "OpenSocial", which integrates them. (Comment -- Google has a strong motivation for this because the growth of social networks is a threat to their business model. Social networking sites have the potential to deliver searches personalized by the experience of your trusted friends, while without knowing your social network Google can provide only a generic set of results.)
Rebecca -- Looks like what happened with Instant Messaging. There was a big push to create interworking, but operators just wouldn't provide the interfaces and in fact worked to block interworking. Roberto -- the difference is that the social network operators already provide the APIs; all we have to do is put the integration layer on top.
There are a lot of interesting things about social networks not yet being exploited. For example, friendships in "thin" networks are more valuable than in dense ones. One limit that has been proposed is "
Lots of opportunities behind social networks, but you have to appreciate the value to the user and not abuse them (e.g. by using a user’s social contacts to send SPAM).
Security -- the web has brought us lots of bad stuff (malware, viruses, worms, spam, etc.). He gave some statistics on rates of infection of endpoints. Users have to assess the risk and be careful. Google is considered safe by many, yet Google digs into the mail that passes through it and indexes everything. Users probably wouldn't like it if they thought about it. There are about 100 million web sites, 99% of which have unknown security. (Comment -- one of the applications I use, AVG, can be set up to give you advice on the results of web searches by flagging sites it thinks are safe. I have wondered how this is done, what happens if one of those sites is bad, and whether, for example, the software used to annotate the web search could be hijacked to instead substitute links to imposter sites.)
HashCash -- a way to prevent spam. In order to send mail you have to generate a hard-to-compute hash, which the receiver can verify with a much easier computation. The idea is to make it hard for someone to do a mass mailing. The real point here isn't to figure out whether or not this works, but to look at it as an example of how web2.0 people think about problems: it's end to end and doesn't rely on anything trusted in the network. (Comment -- and of course the first thing that occurred to me when I read this is that if I were a spammer I could use cloud computing, either the stuff offered virtually for free or the services of the zombie network I created by embedding viruses in my spam, to do those computations for me.)
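(Comment -- a toy version of the mechanism in Python, to show the asymmetry. Real Hashcash stamps also encode a date and the recipient's address; the difficulty here is set low enough to run in a second or two.)

```python
import hashlib
from itertools import count

BITS = 20  # sender must try ~2**20 hashes on average

def leading_zero_bits(digest: bytes) -> int:
    n = int.from_bytes(digest, "big")
    total = len(digest) * 8
    return total if n == 0 else total - n.bit_length()

def mint(recipient: str) -> str:
    # Expensive for the sender: search for a nonce that makes the
    # hash start with BITS zero bits.
    for nonce in count():
        stamp = f"{recipient}:{nonce}"
        if leading_zero_bits(hashlib.sha1(stamp.encode()).digest()) >= BITS:
            return stamp

def verify(stamp: str, recipient: str) -> bool:
    # Cheap for the receiver: a single hash.
    return (stamp.startswith(recipient + ":") and
            leading_zero_bits(hashlib.sha1(stamp.encode()).digest()) >= BITS)

stamp = mint("alice@example.com")
print(stamp, verify(stamp, "alice@example.com"))
```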
Identity management. There are 3 approaches:
- Centralized -- an identity provider owns the data and requires users to authenticate explicitly.
- Federated – a distributed solution that allows one provider to vouch for a user with a service that wants to authenticate him.
- User centric -- the user provides authentication for himself.
The real problem -- with 6+ billion people there's no unique identity. Rebecca said she wound up confused on a website with another Rebecca Copeland who has written a book on Japan, and the site owner just said that's the way the web works (i.e. there was no way to keep this from happening).
(Comment – As a user of web search since the first browsers, I have seen significant changes in how effective searches on a name are. Initially, if you searched on the name of a person involved in computing you were almost guaranteed to find relevant information on the person. Then, as more companies and government units put up web sites, a search, especially one on a common name, generated a lot of false hits. Now there are so many people with an online presence that such a search is likely to be useless.)
Someone in the audience described an incident where a friend who knew how to get highly ranked on search engines had a picture of one of his children used in promoting some company, so he retaliated by making "we steal pictures of your children" come up in association with the company. The message here is that you have to actively manage your identity with the world; if you don't and it's valuable, someone else can attach it to something else, and the solution isn't legal recourse against them, it's retaliation. (Comment -- this is very "Wild West". I don't know whether this is going to last. Identity theft has become a major issue when it comes to financial matters, and I suspect that will eventually extend to all things online.)
The last speaker was from Dial2Do. He worked for
"What if Telcos were really Web2.0 savvy" -- a thought game they use often, particularly at conferences:
- Text messages might be free in return for being open, searchable, and re-usable (like Twitter). The ability to monitor things in Twitter is really interesting -- Google monitors talk about gmail failures and learns of most failures that way.
- Every time I text a friend or call them, it would/could be posted to a website (e.g. Facebook). (Comment -- I've never understood why people do this.) He presented Skydeck -- a
- When I went to call someone, contextual information would be pulled in real time from the web. (Comment -- like "screen pops" for call center agents? This is a bit like what has been envisioned by Gordon Bell of Digital Equipment, who has been experimenting for years with using technology to record all his interactions with the world.)
Many developers approaching the mobile telecom world face culture shock. The internet has lots of tools, instant worldwide distribution, billing and pricing are easy, and your pay is predictable. In mobile, there are a lot of different tools and capabilities depending on what endpoint and network you build for. Nothing is universal; things are under the control of operators; there are different revenue models; etc.
What do they want?
- Help ease the fragmentation issues.
- Make it simple to get paid.
- Share revenue.
- Let me at the network assets.
- "Get out of my way."
Fragmentation is getting worse, not better -- more and more APIs, networks, devices, rules. More platforms in the endpoints and no clear winner. Android will have a big push, but not all environments will be equivalent. Devices vary -- capabilities are not all the same. There are regional variations and operator variations. There are different approaches to building applications -- native applications, widgets, web apps. There is a blizzard of operator/network APIs. There are some efforts at convergence (he mentioned one called OneAPI, but many see that as simply yet another API to deal with).
Payment options are extremely varied. We need a solution that is network based, minimally intrusive to the user experience, and predictable (no surprise big bills). Apple has been very successful because they already had iTunes with 100M user profiles with credit card information, which made it easy for users to pay for applications. (Comment -- this is probably right, in that there's most likely a high overlap between people who buy music online and people who buy applications online, but you have to be careful how you do things like this. I've been in a lot of situations where someone asked for payment using PayPal to make it easy for people, and half the people paying found that off-putting because they didn't already have a PayPal account, and the extra overhead of having to open a payment account and learn a system they had never used just to pay for something they wanted caused them to drop out.)
How do developers decide what to do?
- Is it easy to use? Is it cool?
- Who will be able to use it (reach)? Really interesting: Apple has 40M devices out there, Blackberry has only 10M, Android 1M, and Palm less than that. He had some very interesting tables outlining "application store" characteristics taken from "GetJar", a source of Java applications.
- How long does it take to use?
- How will users pay?
- How much money can I get and keep (margin)?
Lots of APIs; things are better than a year ago. (Comment -- he talked about several efforts aimed at developers (conferences, associations, standards efforts, etc.), and several times asked who in the audience had heard of one. Usually the answer was nobody. That's a big problem for the ICIN crowd.)
"If
you want a lesson how to do platform APIs, Facebook
is the one to look at. They have been
very careful about releasing access to prevent new developers from abusing FaceBook."
"If
you want a lesson on how to do the application store, look at Apple. The positive feedback to a developer of
getting their application into the app store is a powerful motivation to do it
and a reason they have LOTS of applications.”
Rebecca -- the sky isn't falling; there's still scope for telcos. The premise that the web community doesn't want to come to telco isn't right -- the reaction on the mobile side is very different to what we had on the fixed side. Her view was that mobility is key. (Comment -- I think it's more about the model being more familiar: a smart client with more or less normal IP access to the world, not a lot of dumb endpoints you can talk to only through very restricted APIs.)
This ended the prepared material. Rebecca Copeland asked if people were willing to stay a bit longer for a discussion, and most were. She started by brainstorming ideas for applying Web2.0 to Telecom:
- Web based Push to Talk?
- Web based conferencing.
- Share my desktop (like Twitter sharing).
- File transfer between two mobile phones (send a file to a telephone number).
- Web based special offers (50% off all calls to China in the next 2 hours).
- Phone based identity.
- Extend Bluetooth capabilities over the network.
I think the intent was to then pose these services to the speakers as examples and ask how they would be built in Web2.0, but all of the time was eventually consumed fine-tuning the list and debating which ones already existed.
The main conference began Tuesday morning with two plenary sessions.
Dan gave a brief introduction to the conference. This is the 20th anniversary of ICIN. In 20 years more than 700, probably 750, papers have been presented. Over that time the focus of the conference has changed. What has remained the same is that ICIN is primarily concerned with the service layer. This year 105 papers were received; 53 were selected for presentation and 11 for the poster session. (Comment – ICIN has a unique submission and selection process. Authors are asked to submit extended abstracts with considerable detail. The abstracts are reviewed by the 30+ members of the TPC as a whole, who rate the papers. TPC members are asked to review at least 1/3 of all the submissions, and more than a few read all of them, so most papers get at least 10-15 reviews.)
He first came in 2004 as a gate crasher. He had a meeting with the then IAB chair on another matter, and they decided to meet in
(Comment – Dan is also a relative newcomer, first coming in 2004. We shared a taxi and I was surprised he hadn’t been to some of the earlier conferences.)
This year had 4 keynotes, probably too many for the time. (Comment – keynotes are often delivered by high-level people who are hard to get, so, like airlines, conferences tend to overbook their keynote speakers. ICIN got lucky in having all the invited speakers come or send representatives.)
What Telcos fear – becoming a pipe that transports a lot of interesting services from one end to another with high quality but doesn’t substantially participate in the service or the revenue stream. (Comment – yes, this hasn’t changed in many years, yet it is now roughly the 25th anniversary of the “end-to-end argument” paper on the architecture of communications networks, which proposed that anything that does not need to be inside the network to function should be done end-to-end. Several speakers cited the end-to-end argument paper, but that’s a tough message for network operators.)
What they want – to become the full-service store (his visual: the “colorful fruit stand”) where users go to get what they need.
DT’s view is that there will be growth in new IP services requiring more from the network (audio streaming, video, gaming, non-elastic P2P, etc.), and these will dominate the revenue stream. The view is that there is a limited time window in which this can be done – architectures are being decided, and if Telcos don’t get there they will not be included.
To make this happen we have to bring together two domains: Telco – very conservative, high quality, high regulation – and Web2.0 – highly innovative and flexible, rapid growth, very open.
Many things are being digitized – money, profiles, pictures, relationships, etc. How much money from this ends up in the pockets of the Telcos depends on how much they can help in this process (i.e. provide privacy, security, recommendations, etc.).
Supporting “user centricity” – user profile information is now fragmented, held in many places. The Telco can be the provider who allows the user to consolidate and manage this. (Comment – maybe, but I suspect that the needs here are very diverse and user specific, and an application in a user endpoint is better positioned to do this.)
The problem isn’t that we don’t have APIs and software development kits – we have too many of them, and the fragmentation keeps them from being useful to developers.
Lots of developer communities, forums, APIs, alliances, etc. Every operator has done its own. The problem is that they aren’t compatible and don’t cover all the required enablers.
Call for action – we have to do smart standardization and converge our divergent views of the developer community. Fora and communities need to work with each other. (Comment – this is very much the “ITU” view of the world – we need to create a standard way to do things and everyone should follow it. It’s not how the web/internet community has operated – do your own thing, and if you do it well you will win and others will follow you; if not, you will wind up abandoning it and following whoever wins.)
Question (Stuart) – How can DT play this role without being viewed with suspicion by the other players? You are the 800-pound gorilla. Answer – size has many measures; by stock valuation there are many who outweigh DT (he talked about the valuations of Google, Amazon, and others, which far exceed the network operators’). Their real hope is to take advantage of their relationship with the end user.
Question (Rebecca Copeland) – What is the real trust the user has in the network operator? Isn’t the user today focused on access, and not inclined to entrust their providers with their identity, profile, etc.? Answer – right now reach is defined by their access. Access is not global; it is restricted to their structure. The question is how to become globally available to the customer, so the customer will be able to trust them with more information without their giving up the very important relationship they have over access.
Question – asked to see a slide showing Telco and Web2.0 meeting as equal partners. Is this wishful thinking? (Short answer – yes.)
(Martin has an interesting history, including consulting in high availability, work at Sprint, and founding an initiative known as Telco 2.0.)
These are his own opinions, not policy from BT. Telephony is 70% of revenue; it’s important, but the era of “minutes” is gone. To survive, they have to replace it, and what replaces it is rich communication in the moment. This happens between users or between users and enterprises, and the role of Telcos is to help make these moments more effective. He views this as the 4th stage in telecom.
He reviewed some history of advertising for BT – in 1988 it was all about how communication was now possible for everything and not prohibitively expensive. In 1998 all the focus was on cost competition. By 2008, telephony no longer appeared in ads.
He presented some revenue charts – voice peaked in 2008, as did SMS. (Comment – I’m suspicious of that date, which corresponds with the worldwide recession.)
He views this as a cycle, new services spawn new needs, which spawn new products.
He played an example of a voice mail to him from the tax collection agency asking him to call them back to arrange to pay. He went through the message in detail – a standard “container” (the synthesized announcements at the beginning), poor quality audio delivering a sequence of business concepts encoded in some arbitrary way decided by the sender, followed by another container (press # to return the call) which does not reflect any context from the message.
The point is this is very inefficient -> you have to decode the contents, often playing it several times; it’s a poor user interface, not personalized to him; and the person leaving the message has to hand craft it and often gets it wrong. It’s also insecure – they don’t know the message will be received by the right user, and the user doesn’t know the message actually came from the tax agency.
(Comment – he has in fact captured the essence of the bad experience users have with voice mail and menu systems.) BT got 5 UK Pence for this call. He would return to this example later in describing how it could be enhanced to become worth more than 5 Pence to the caller and/or the recipient.
One way of getting more context is making use of the user’s social networking. One experiment they tried was to use a social networking site to interact with customers – they had a very positive reaction to it, 50% of users go on to make a positive comment on the experience in a public forum.
There were some maturity issues: They used Twitter, and at that point it did not support commercial transactions, so they couldn’t scale up without violating its terms of use. There’s no security in it. It’s a best-effort service -- what happens if Twitter fails? Will the customer see it as a BT problem? Timeliness is a problem: they can’t react more rapidly than 15 minutes (because that’s the delay in delivering messages from Twitter), and in that time the user may give up and call a call center. (If this happens, not only will the user be unhappy, but BT will waste resources fixing a problem that has probably already been fixed.)
Users have expectations -> The public expects free services and wants free delivery. Customers won’t voluntarily pay for service at a level sufficient to make up loss of revenue on minutes.
Fundamental misalignment with enterprise – BT wants the customer to spend more “minutes”, but spending more employee time on the phone costs the enterprise money (probably a lot more in lost productivity of paid employees than money sent to BT).
The interaction with customers is getting more complex. Transactions with customers used to be done in person, then by mail or by phone. The internet is something quite different – it’s a meta-medium that spawns new ways of interacting with customers daily (e.g. things that used to be done by email might evolve to websites, Twitter, Facebook, or whatever else comes along). This creates a problem for enterprises because they can’t keep up with the number of different ways customers can reach them. Users migrate to channels that are easy to use.
- Have to use existing channels better – unlimited usage plans, better packaging (e.g. send a structured SMS with embedded links for “click to talk”.)
- Have to improve their channels.
Returning to the example of the bad voice mail that started his talk, he gave five ways to improve:
- Insert, update, and delete APIs for messaging. These would allow messages to be updated (e.g. remove a message that has been addressed by other means, so the customer won’t get a summons to the tax office if the tax payment is made before the user picks up the message).
- High definition audio so the message is clearer. (Comment – maybe, but my experience is that the problem isn’t so much the quality of the message but the fact that the person leaving the message wasn’t prepared for it and scatters the key information around without being careful to highlight it and make it clear, plus the awkward user interface given to the message retriever, who usually can’t figure out how to back up a few seconds to catch something and has to replay the whole message, often multiple times.)
- Personalized interactions – don’t send SMS to prepaid users or call people during the wrong hours. (Comment – Very timely, the week before the conference my prepaid phone was beeping early in the morning to deliver a SPAM SMS from a company I will never do business with as a result)
- Smarter containers – allow the sender to determine how the message is presented (e.g. give a specific number to return the call so the user has the option to do it automatically rather than having to retrieve a phone number from the message contents).
- Multi-modal communications.
How? “Twitter for business?” Include things like multimedia, federated identity, real-time, security and authorization.
Another idea: creating new channels for customers. Today, people have mobile phones that generate a call log. He predicted that 5 years from now there would be a major business in “sponsored links” on your call log. (Comment – am I the only one who finds this downright creepy? I don’t like the idea that companies are paying to get my attention and by doing so blocking me from the things I want to control.)
He went back to his example of a bad voicemail and how much the telco could collect for doing things like HD audio (10 cents) up to payment completion (10 Euro).
He proposed there will be a global race to become the global business platform that solves these problems.
His focus is on mobile broadband, not voice. This was a deliberate choice. He feels that mobile broadband has changed the way we communicate, and that’s where the future is. The communications industry is in the middle of a perfect storm -> Broadband + Web2.0/3.0 + smart platforms + social networking. (Comment – I agree with this; I feel ubiquitous mobile broadband plus smart user devices change the game.)
He showed a roadmap for mobile broadband availability. Lots of options and variability, but all have the fundamental characteristic that much more bandwidth becomes available to the user.
New smart phones are fueling the growth of video (Cisco says video is now 40% of all internet traffic, and data use will double each year for the next 5 years). (Comment – this one is surprising to me, but I guess it’s a market for younger eyes.)
Application stores are becoming very popular – every vendor is following Apple.
Cloud Computing is another key piece – it allows people like Amazon to offer lots of value “over the top” and disrupt the business models of others.
These disruptions create opportunities for everyone, some can be monetized only by the carriers.
An example -- mobile video. In 2000, total internet traffic was 25 petabytes a month. In 2008 YouTube and Hulu alone generated 50. This is the killer application. The endpoint this bandwidth is delivered to is increasingly going to be the mobile phone. It is an opportunity as well as a problem: how to grow the delivery network fast enough. (Comment – 2 years ago I was involved in a study of mobile data, and one surprising finding was that many areas have severe bottlenecks in the distribution networks between the radio sites and the network that are not easy to address.)
Today, people are watching videos on free sites and generating no incremental revenue to the carrier but lots of challenges in delivery. Carriers have to support it and have to figure out how to do it with good performance and low cost. One technique is the use of content delivery nodes that cache the video as close to the endpoint as possible. Once in place those nodes are a place where the carrier can add other value either for the end user or the content owner.
Lots of this content comes in small bites, “snackable media” – 1-5 minute clips that are consumed in moments of boredom. Content providers have discovered this and are customizing content for this kind of application. One way to monetize it is to partner with the people doing this to provide it to mobile users smoothly, with better quality than achieved by “over the top” downloading. Users might pay for this.
Another includes customization and profiling (user statistics to the businesses, ads to the customers).
Application stores -> Apple just announced they have collected $1B on iPhone Apps. The service is only 14 months old. There have been 2 Billion downloads. There are increases in both the number of users and the usage per user. The average user downloads 10 applications per month, at a typical cost of $10 (Comment – there was a lot of inconsistency in the data I heard on iPhone application downloads, so I don’t know if this was $10 per application or $10 per month per user), and this is growing. (Comment – amazing what people will pay if you provide them value and package it right.)
Carriers will produce multi-platform application stores which serve users with different platforms. The key differentiator for iPhone today is ease of use – Apple makes it easy to reach iTunes, then navigate and buy something. Carriers are following this model.
Instant gratification is important. Users have to be able to buy, download and activate easily and quickly.
Another opportunity for the carrier – “Clean pipes” The idea is to guarantee to exclude malware and attacks. There are two types – from the network and from the end user.
Mobile phones are different from PCs. Users essentially lease the phones as part of a contract. It’s not clear whether the user will accept the PC model for security (e.g. You are on your own and you download security software from 3rd party). They may be more willing to buy secure access from the carrier.
Opportunities in Cloud Computing – Carrier can provide their own cloud computing service (either do it themselves or “OEM” it from someone). (Comment – I find this one fascinating, in the late 1970’s AT&T was developing something very much like cloud computing and hosted applications for small enterprise, and the effort was a failure because there simply wasn’t a price point low enough to compete with buying your own computing and applications that would pay for the service infrastructure. I’m not sure whether enough has changed to change the conclusion)
The other opportunity is to sell
A third opportunity—cloud switching. Look at cloud providers as generic, and provide an integration point that collects applications from different clouds and integrates them for an enterprise that wants “best of breed” but doesn’t want to do their own integration.
In his view the key way in which Web2.0 changes how we communicate is that people participate in both producing applications and content and consuming it.
Today you can buy a grocery list application for $4.99 (This is the top iPhone download!) $2 of this goes to Apple. This is a generic application with nice graphics, but you have to do everything:
- update the list in response to messages
- set reminders
- find a store with the items
- find the best price for everything on the list
- find coupons
The shopping experience will change with “Web3.0”. Web3.0 is all about personalization. The grocery list will be not just a list but an application; it runs at all times and you can customize how much control you give it (e.g. allow or deny it the opportunity to autonomously connect to the network, buy things automatically, communicate with other applications, etc.). (Comment – this is interesting. One of the things I wonder about the iPhone phenomenon is how much downloading is fascination with the novelty and how much is sustainable. Giving an application the ability to manage your grocery buying is something some folks may want, but others surely won’t.)
He gave a history of digital consumer electronics starting from the first CD player in 1982. The 20th century was mainly about digitizing the information; the 21st is more about networking it. Digital TVs, digital cameras, and DVD players have exceeded the number of PCs. (Comment – I wonder how many of these are cheap cameras.)
These devices drop rapidly in price, and the price drop is accelerating (2 years for plasma TVs to drop by half). (Comment – this didn’t seem particularly surprising to me. Most consumer electronics have followed more dramatic price-drop curves. In fact I suspect TVs have fallen more slowly than many.)
As the complexity of LSI devices has increased, you can put more and more into one device – multiple recording channels, networking, program guides, etc., but at high development costs.
Consumer Electronics are being threatened by lots of factors – low cost manufacturing, substitute products, mass retailers with lots of pricing power, etc.
Home networking is finally starting to take off, driven by HD connections within the home, though it is still carrying low-speed services. (Comment – I can probably find a chart from about 1980 showing the same thing, but it took a long time for it to become really practical.)
Current situation: With increasing competition, networking is very important, since most interesting services are delivered through networks. He showed an interesting chart of average broadband speeds in various countries. Many countries in
He talked about video services in
What about 3D? –
In the future, he anticipates 3D without the glasses (not sure what technology).
He had a timeline for the evolution of TV – analog resolution, semi-HD (6 Mb/s), full HD (12 Mb/s), 3D (24 Mb/s), and multi-view 3D (100+ Mb/s, which he doesn’t see being broadcast at this rate, so it’s all network delivery).
Shifting perspective a bit, he put up a slide on the problem of CO2 emission reduction. Lots of effort has been put into making devices in the home more efficient, and the return for that effort is decreasing. He presented the future as, in addition, adding energy generation and storage at home with networking to control and optimize it all. (Comment – this was all interesting, but I’m not sure about the claim that opportunities for conservation are diminishing. In the US I expect energy consumption in the home has increased, in part because of the number of consumer electronics products in use and the number of them that consume power continuously, vs earlier generations which were not “smart” and did not need to be powered when “off”.)
The future is to network smart homes and enterprise into a smart grid – it’s all about networking.
John O’Connel from HP was the session chair. The keynotes have already highlighted why it is important that we establish the network as a platform. This is not an obvious choice for all – some view the web as the platform, some view the cloud that way, and for others it’s the endpoint. The value of a platform is directly linked to the number of applications. (Comment – I was in the platform business for years and strongly agree here. That’s what’s a bit scary about the iPhone phenomenon, and also what argues that endpoints with lots of applications are going to win this position.)
Network Virtualization allows resources (computing, storage, communication, data, etc.) to be gathered as needed, independent of location, and used together in an application. The key problem isn’t that no virtualization exists; it’s that what does exist isn’t adequate, and we need more focus on it. In the past we talked about “Service Oriented Infrastructure” (SOI). This has now been subsumed under “cloud computing”. The most important SOI realizations are computational clouds – huge data centers, managed computing resources, mainly focused on “batch” processing with no timing or location constraints.
ISONI (Intelligent Service Oriented Network Infrastructure) – a more telecom-style cloud computing – smaller networked data centers that allow applications to be “nearer” the requester. ISONI manages heterogeneous resources including processing, storage, and transport. Provisioning is much more complex for this because of the variety and distribution.
IXB (ISONI Exchange Box) – provides the underlying network virtualization that allows the requested network configuration to be mapped onto the physical network securely and with QoS.
He covered a lot of detail on how the mapping was done in particular with respect to achieving QoS.
They provide support for live migration of applications between computing elements, which they feel is essential in delivering a “Telco quality” service.
She talked a bit about how to do multi-party multimedia conferencing using IMS. The building blocks are signaling, media handling, and conference control, an important aspect of which is controlling the floor (controlling what participants can do).
Development needs APIs – standard ones including Parlay and Parlay X, JSR 289 (SIP Servlet), JSR 281 (Media), and JSR 309. They also used non-standard APIs for floor control.
Only two generally available toolkits for IMS – Ericsson SDS and Fokus Open IMS Core. (Comment – not sure why only two; I suspect that these may be the only two available for free.)
They implemented a game, “Capture the Flag”, using an existing networked game application that had no communication capabilities. They also added a capability to bomb an area, which requires the team to communicate to decide where and when to use their one bomb.
They built their own API for floor control – two floors, one for the game, one for the bomb. It’s all based on XML messaging between a client in a cell phone and the application running in a network server.
Each participant has a SIP session with the game server, which instructs the media server to connect to each of them in “receive only” mode. A user who wants the floor makes the request to the floor server, which tells the media server to open that user’s connection to bi-directional.
Their high-level API saved substantial coding effort over the raw use of JSR 309 and JSR 289, with no significant degradation in performance.
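(To make the floor-control mechanics concrete, here is a toy Python sketch of the logic described above – one holder per floor, with the granted user’s media leg flipped between receive-only and send/receive. The class and method names are mine; the real system drove a media server via JSR 309 and XML messages.)

    class MediaServer:
        def set_mode(self, user, mode):
            print(f"media leg for {user}: {mode}")

    class FloorServer:
        def __init__(self, media, floors=("game", "bomb")):
            self.media = media
            self.holder = {f: None for f in floors}

        def request_floor(self, floor, user):
            if self.holder[floor] is not None:
                return False                       # floor busy, request denied
            self.holder[floor] = user
            self.media.set_mode(user, "sendrecv")  # open the talk path
            return True

        def release_floor(self, floor, user):
            if self.holder[floor] == user:
                self.holder[floor] = None
                self.media.set_mode(user, "recvonly")

    fs = FloorServer(MediaServer())
    print(fs.request_floor("game", "alice"))  # True, alice holds the floor
    print(fs.request_floor("game", "bob"))    # False, floor already held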
Conclusions:
- Lots of interest in gaming.
- Many shortcomings in the APIs for IMS in supporting gaming.
- Separating floor control from the media server helped make it doable, because no media server today implements the capabilities needed for floor control on its own.
Roberto is a long-time participant in ICIN and always has interesting things to say. One thing he said is that he is currently studying for a Ph.D., trying to broaden himself and update his views.
He gave slides on pricing and usage (From FCC and other sites). The good news is that Voice and Broadband are growing. The bad news is that prices for everything are dropping. Paradox: The more the network is used, the less revenue you get.
He cited a book on scientific revolutions from the 1970s that said that when a theory reaches its limits, paradoxes appear that can’t be resolved within it, and it’s time to change. He sees the revenue and cost paradox above as an example of what’s going on.
He tried to apply what Martin Geddes was saying to his market: in Italy 50% of companies are not on the web. Almost nobody uses network voice mail. There’s no economy in better messaging because call centers have been outsourced.
He presented a view on telecom strategy – move from data and services over voice to voice over services over IP. He presented the IMS architecture with APIs, and made the claim that no web companies use them. IMS violates the “end-to-end” principle that drives the network.
So, what do network operators do? Create walled gardens, or be bypassed “over the top”.
How many APIs are in the Web? Thousands, and nobody wants to reduce them. They are proprietary to differentiate themselves. Developers decide which are worth using.
Web companies are global – they can be reached everywhere. Telcos are regional – accessible only to their direct customers. (Again the message was that global accessibility is much more attractive to application builders.)
Why Web companies are not going to use Telco SDPs:
- They already have their own.
- They don’t see the Telco as providing value for them.
- They see the Telco as too slow to respond.
- They have their own business agendas.
What can Telcos do with their SDP that others can’t? Maybe it’s spanning administrative domains and technology.
What else can Telcos do to participate:
- Provide data – Telcos have lots of data that is of value to others.
- He gave an example, “Skydeck”, which builds a social network display from your call log.
What’s the way to evolve – “the web is the platform”, don’t try to replace it, accept it and work with it, and focus on providing more valuable data to it.
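(To illustrate the kind of data play Skydeck represents, here is a minimal Python sketch that builds a weighted contact graph from a raw call log. The log format is invented for illustration.)

    from collections import Counter

    call_log = [
        ("me", "alice"), ("me", "bob"), ("alice", "me"),
        ("me", "alice"), ("bob", "me"), ("me", "carol"),
    ]

    # Count interactions per undirected pair of parties.
    edges = Counter(frozenset(pair) for pair in call_log)

    # Rank contacts by interaction count -- the skeleton of a social display.
    for pair, weight in edges.most_common():
        print(sorted(pair), weight)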
Suggestions:
- Skinny down SDP and focus on crossing borders.
- Differentiate consumer and business market segments
- Move on to a data approach (leverage it, defend user privacy)
- Find new ways to challenge new competitors (P2P)
- Look for opportunities in the “Internet of Things”
After this the floor was opened to questions to all of the speakers.
Question to Marcus – You said ISONI did dynamic management, but what you showed was static. (Answer, he didn’t show it all)
Question to Roberto – All Telcos are facing the same problem and all are trying to solve it themselves – why not go open source, let the developer community define the APIs.
(Answer – customization of open source platforms today prevents portability – not a good way of generating a powerful platform with lots of applications.) He showed repeated reluctance to putting open source into products because of poor experiences.
Question (Chet McQuaide) – how should Telcos monetize data while coping with laws regulating privacy? Answer – the Telco first has to be sold that this is a good idea; once sold, the company needs to decide what will be opened and how to cope with local law.
Interesting example – often there’s a dispute over how many people show up for a political rally; the Telco operator knows at least how many of them had active mobile phones! (Comment – he presented this as an example for Italy, apologizing that it may be unusual, yet that specific problem has been hugely controversial in the US.)
Question (Dave Ludlam) – why, with all the effort we have poured into Parlay and other APIs, do we find they are inadequate for what developers want to do? (Answer – it’s not that it can’t be done, it’s that Parlay is hard, and you can spend a lot of effort on low-level things not of use to you.) (Comment – the problem is the Parlay group focused on how to expose everything, which made it complex. Most of the time the developer doesn’t care about most of what’s in the API, so using a higher-level but narrow API can be a big win, provided you can get to the whole thing if you need it. Roberto actually gave an example of this sort in response.)
Question to Roberto – what are the major concerns of TI management in opening data from the network? What would you do if you were the boss? (Answer – give it a try; get some lawyers and technicians together to figure out what’s possible and what can be released without compromising the user or the law. Then try it with a beta test to see who would be interested. Many operators have no clue what they have; first you need to understand that, then figure out what to expose.)
Rebecca Copeland chaired the session – services are why we are here, but time and time again she gets asked “what’s the killer application?” What justifies the investment in IMS? One place to find innovations is in the cracks – between applications and devices. Another source is blending of applications: how can I combine what today are separate services into one coordinated service?
He introduced the notion of evolution of how people contact each other. First there was the phone, then there were “click to call” solutions, and now we have social networks that define connections. Current web solutions have lots of limitations:
- Interoperability. One application doesn’t interwork with another.
- Adaptability – difficult to adapt to specific situations.
- Reachability. It is difficult to figure out how to make yourself reachable without losing control and allowing people you don’t want to talk to too much access.
They have built an application that provides hyperlinks that users can click on to reach them, which implement the actions necessary to initiate a call. Because they are implemented in HTTP they can be passed around. The link hides your real phone number so you don’t have to expose it, and you can disable the link if it is lost or abused. The caller gets to determine what kind of connection (time, nature, where it goes); this can be based on real-time information, like time, schedule, state, etc. They have a trial service with 700 users. The feedback is that users find it interesting, but a bit confusing. They are working on improving the user interface, and moving toward supporting full mashup capability to allow this service to be combined with others.
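(To make the hyperlink idea concrete, here is a small Python sketch of one plausible realization: a revocable token maps to a routing policy instead of exposing the real number. The token format and policy fields are my own invention, not the authors’ design.)

    import secrets

    links = {}  # token -> routing policy

    def create_link(owner_number, allowed_hours, label):
        token = secrets.token_urlsafe(8)
        links[token] = {"target": owner_number,
                        "hours": allowed_hours,   # e.g. range(9, 18)
                        "label": label}           # tells the callee *why* they are called
        return f"https://example.invalid/call/{token}"

    def handle_click(token, hour):
        policy = links.get(token)
        if policy is None or hour not in policy["hours"]:
            return "call refused"
        return f"routing call to {policy['target']} (context: {policy['label']})"

    url = create_link("+15551234567", range(9, 18), "conference follow-up")
    token = url.rsplit("/", 1)[1]
    print(handle_click(token, hour=10))   # routed
    del links[token]                      # owner revokes the link
    print(handle_click(token, hour=10))   # call refused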
This presentation addressed services that could be implemented by combining GPS with a low-performance radio data service (RDS) for communication to vehicles. There was considerable discussion from the audience about the usability of RDS and its adaptability for this, as it is apparently an older technology with both technical and regulatory limitations on its use. Some example services:
- Real-time resolution of best fit – finding, by GPS, the intercept location of different vehicles on the way between two vehicles.
- “Tell people where I am” service – the customer for this service might be a salesman, delivery service, or repair service whose customers want to know where the person is and when they are coming. Someone who is expecting a driver calls the person without interrupting them (because the driver’s service knows they are on the road based on GPS information relayed to the network over RDS) – an IVR can respond with where they are.
- Traffic regulation – use vehicle location to interact with a central server and download location based regulations that the car enforces (e.g. no honking near a hospital.) (Comment – this is interesting, but I don’t know how acceptable it is.)
- SOS service (like OnStar™) – the vehicle calls for help when an accident has occurred and uses GPS to automatically describe where it is.
- Parking finder – the vehicle asks for availability of car parks in the area. (Comment – the hard part is automatically collecting the data that determines where the spaces are; systems which track spaces in parking garages are often wrong.)
The implementation uses GPS with two-way communication (RDS) and user and service profiles for information that the services need.
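(As a concrete illustration, here is a Python sketch of what a compact vehicle-to-network report over a narrowband uplink might look like, with a vehicle tag for sender validation as discussed in the questions below. The field layout is entirely my invention.)

    import struct

    def encode_report(vehicle_tag: int, lat: float, lon: float) -> bytes:
        # 4-byte tag + two 4-byte scaled coordinates = 12 bytes, narrowband-friendly
        return struct.pack(">Iii", vehicle_tag, int(lat * 1e6), int(lon * 1e6))

    def decode_report(frame: bytes):
        tag, lat, lon = struct.unpack(">Iii", frame)
        return tag, lat / 1e6, lon / 1e6

    frame = encode_report(0xA5F00123, 48.8566, 2.3522)
    print(len(frame), decode_report(frame))  # 12 bytes, tag + GPS fix recovered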
Question – how do you implement the upload path? Apparently it’s a recent addition to RDS, not universally accepted. Another problem is recognizing the sender – how do you validate it? Answer – you use vehicle tags that make it specific to a particular vehicle.
Question: How do you manage the load for the upward traffic? Answer – you can use different frequencies.
Question: How about security? (Answer – we need it, especially against controlling functions on the car.) (Comment – this generated considerable discussion. The notion that a system would automatically keep you from exceeding a speed limit or honking in a “quiet” zone was acceptable to some and not others, but even those who found it acceptable were concerned about security.)
The RDS Forum has recent information on the technology. “RDS looks like the beginning of SMS, simple and robust. Not known if it scales that way”
Idea – use the camera in a phone to take a picture of something, upload it through your data connectivity into a network based search engine, and let it find things relevant to the object you display. This is really a framework for a lot of similar services.
Motivation – this will increase use of the mobile internet. Two challenges – many mobile users are hesitant to use mobile internet services due to pricing or the usability of mobile devices, and Telcos are in search of new services and business models. (Comment – yes, but this strikes me as something that doesn’t need to be done by the carrier, so I would expect strong competition from search companies.)
There are lots of things to take pictures of – advertisements, bar codes, catalogs, etc. Each kind of thing may lead to unique service opportunities:
- Mobile shopping – snap a picture, use visual search to figure out what it is, get details on the pricing and eventually more.
- Catalog shopping – point your phone to a catalog page and an application recognizes what you are pointing at and gives you more information about the object.
- Mobile advertising – snap a picture of an advertisement for something that interests you and you can get more information on it.
The service is implemented using a lightweight client that uses the device camera and some simple image recognition elements. A request is then made to a server back in the network. The server has a more complete image recognition piece and a visual broker to determine what kind of service is being requested, based on what kind of “thing” was photographed.
They have a simple implementation on a Symbian phone (165 KB of code). They can recognize 1D bar codes, recognize simple objects, and scan pages of text.
The bar code recognizer doesn’t need to take a picture; it runs in real time on the stream from the camera. The image recognizer runs in the server (requires taking a picture and uploading). They claim 90% accuracy for recognizing posters and CD covers, the two kinds of objects they tested it on.
Page recognition uses a 2D code on the catalog page to identify the page, and from there recognition of exactly what the user is focused on takes place on the device.
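(A minimal Python sketch of the client-side split described above – try the cheap on-device recognizer on the live stream first, and only upload a still to the server when that fails. All the functions are stand-ins, not the authors’ code.)

    def decode_barcode(frame):
        # stand-in for the real-time 1D barcode scanner running on the phone
        return frame.get("barcode")

    def server_recognize(image):
        # stand-in for the network-side image recognizer (posters, CD covers)
        return {"kind": "poster", "match": "example-concert"}

    def visual_search(frame):
        code = decode_barcode(frame)
        if code:                        # handled locally, no upload needed
            return {"kind": "barcode", "match": code}
        return server_recognize(frame)  # fall back to uploading a picture

    print(visual_search({"barcode": "4006381333931"}))
    print(visual_search({}))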
Image recognition and visual search can be attached to other services to become an enabler for service scenarios.
Question (Chet McQuaide) – could you do the bar code recognizer as an iPhone application? (Yes, it’s been done).
-- Could you in the future capture images from video (i.e. point the phone at a TV ad to buy the product.) -- There are startups working on this.
-- Could you use this for information for travel or for objects in a museum? Yes you could, but GPS will be better for some of this. (Comment – I saw services like this many years ago, but I think image recognition would definitely be an enhancement, because GPS location isn’t always enough to determine what you are looking at, and museum objects are occasionally moved around.)
Rebecca Copeland opened the general question session for all the presenters by asking the audience -- Would you buy these services? What questions do you have for the authors?
Question on hyperlinks – I can see the value for some people but am skeptical about whether this is workable; it has some of the problems of caller ID. Partial answer: with the link, the callee may not know who is calling, but they know why, because of the link the caller activated. (Comment – this is interesting, but of course also subject to abuse: people can cache links and use them later. People do things like this today with their addresses. One of my friends in graduate school used a different middle initial for each new place he was asked to enter a name and address, and used the result to figure out how someone found him and who was sharing his contact information. Many people use “shell” email accounts for dealing with dubious on-line sites that require an email, so that they can trap SPAM sent to those accounts without affecting their real email.)
“I wouldn’t trust giving any central service the ability to control my car.” Have you studied the social acceptability? (Comment – the questioner was German, the speaker was Indian. I suspect that there is a very strong national/cultural component to the acceptability of some services like this.) The control could be passive – a flashing red light to tell you that you are exceeding a speed limit. (Someone else made a point about accidents that I didn’t fully capture.)
David Ludlam introduced the session with some questions: what complexities arise in implementing these services, and how have the specific service architectures (NGN/IMS) addressed them?
She has headed a group on services in Telefonica since 2006 and has represented Telefonica in various standards efforts, including TIPHON and 3GPP.
The current scenario is the big bang of internet services – an explosion of services, moving from Web 2.0 to 3.0. Services work on individual networks – they don’t necessarily work across boundaries.
ETSI launched a project on interconnection of IP services in 2008; the goal is making interconnection really work, including QoS and billing across networks. Some of the results include:
- Try to avoid using the public internet for interconnection, due to unknown QoS and security issues.
- The notion is that networks interface at border nodes that have to address seven issues (security among them – the full list went by too fast to capture).
- IPX (IP Packet Exchange) is the natural evolution of GPRS Roaming Exchange. This provides support for a lot of activities, including Outsourcing, which allows companies anywhere to interact through secure networks.
She went through a lot of requirements for just what has to be interworked at different levels. (Comment – this was a detailed presentation of what seemed to be a quite complex solution, what I would expect of a top-down standards-driven effort. The paper has much more detail.)
Web linkage is about two users doing a shared application together; the network provides the information needed to connect the two. (The example he gave involved a web browser with a phone connected to another instance, but I think the intent is to be able to share a PC-based application with a smart phone.) They can share the screen and cursor.
Three technologies for this -> session information acquisition, session information verification, and then session information federation.
The architecture used IMS to connect the browser to the session linkage server (I think the web server connected directly). “Parlay X is a set of web services APIs for the telephone network, easy to use.” (Comment – I highlighted this statement because many speakers in the first two sessions talked about how Parlay X was not as easy to use as other approaches.)
He went through the session establishment process using Parlay events and states.
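(For readers who haven’t seen Parlay X, here is a hedged Python sketch of driving a Parlay X Third Party Call service with the zeep SOAP library. The WSDL URL is hypothetical, and the operation and parameter names follow my recollection of the Parlay X spec – check the WSDL your gateway actually publishes before relying on them.)

    from zeep import Client

    # Hypothetical gateway WSDL location -- not a real endpoint.
    client = Client("https://gateway.example.invalid/parlayx/third_party_call?wsdl")

    # Set up a network-initiated call between two parties; returns a call
    # identifier that can later be passed to EndCall or GetCallInformation.
    call_id = client.service.MakeCall(
        CallingParty="tel:+15550001111",
        CalledParty="tel:+15550002222",
    )
    print("call set up, identifier:", call_id)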
They built it on Open IMS Core and verified that it would work. (The presentation visuals were a bit too detailed for me to get the big picture as to exactly what they were focused on doing, and what the IMS really did.)
This talk focused on how to access multiple devices behind a single network endpoint for use in applications. IMS can be used for remote service access, but this creates addressing issues when you need to access many devices at a single endpoint – it’s not clear how to name them. Service examples include: turn a lamp in your home on or off from your mobile phone, get a video feed from a security camera, print an MMS, listen to MP3s from your home library. (Comment – you can do all this with smartphones with the right application; it’s not clear how IMS helps.)
The solution proposed here was remote access via IMS – use your mobile phone to get to the home gateway, then contact devices in the home network – there’s an addressing problem. Their solution is to use a network server (presence) and UPnP in the home environment to help the phone discover and talk to devices in the home network. They propose to use a new concept, wildcarded public service identities (PSIs). The wildcard allows the phone to look for devices of some type in the home network. The application server registers to pick up the wildcarded addresses and resolves a request sent to it to a specific device registered from the home network.
An advantage here is that the wildcard PSI address survives sessions with the specific device and can be used in presence registration and other persistent services. Second, no operator provisioning is needed to do this. (Comment – these are both appealing traits.)
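(A minimal Python sketch of the wildcarded-PSI idea as I understood it: the application server registers one wildcard and resolves incoming request URIs against devices discovered in the home. The URI shapes are illustrative only.)

    import re

    # One wildcard registration covers every device in this home network.
    WILDCARD = re.compile(r"sip:(?P<device>[\w-]+)\.home42@operator\.example")

    registered_devices = {          # learned via UPnP discovery in the home
        "lamp-livingroom": "192.168.1.20",
        "camera-door": "192.168.1.31",
    }

    def resolve(request_uri: str):
        m = WILDCARD.match(request_uri)
        if not m:
            return None
        return registered_devices.get(m.group("device"))

    print(resolve("sip:lamp-livingroom.home42@operator.example"))  # 192.168.1.20
    print(resolve("sip:toaster.home42@operator.example"))          # None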
A problem today in a multi-media call is that each device must maintain media synchronization independently. 3 solutions:
- Master/slave model – make one media stream the master, which establishes timing, and synchronize the playout of all others to it (a sketch follows below).
- Maestro model – collects the output time of each media flow and decides the output time of subsequent flows using intelligence.
- Distributed model – each device acts on its own.
(He gave a very complex explanation of how it all works, too complex to follow in a short presentation. The paper contains the full details.)
He showed the test network they used for this – IMS control components (P-CSCF, HSS, session control, conference server), plus a two-layer distribution network.
Data on the degree of synchronization showed that it worked.
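(To make the master/slave model concrete, here is a toy Python sketch: the master stream’s playout clock is authoritative and each slave computes a correction to track it. A real system would derive these times from RTP/RTCP timestamps; the numbers here are invented.)

    def slave_correction(master_playout_ms, slave_playout_ms, tolerance_ms=20):
        """Return ms to advance the slave's playout clock to re-align.

        Positive: slave lags, drop buffered media to catch up.
        Negative: slave is ahead, add that much playout delay.
        """
        skew = master_playout_ms - slave_playout_ms
        return 0 if abs(skew) <= tolerance_ms else skew

    print(slave_correction(1000, 1035))  # -35: slave ahead, add 35 ms delay
    print(slave_correction(1000, 990))   # 0: within tolerance, no correction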
Following this talk the moderator opened the floor to questions for any of the speakers.
Question on home devices – why not just use HTTP to address devices at home? Answer – this requires an internet connection to the mobile phone and an HTTP client on the mobile phone. Followup: But you need IP connectivity just to do SIP to initiate an IMS session, HTTP is simpler. Answer: I guess so, but you need more protocols to use HTTP directly. (Questioner had lots of follow ups on the specific architecture, what needs to be registered and known by the mobile network.)
Most of the remaining discussion concerned the paper on addressing. I think the others were interesting, but the addressing paper was a concise and easily understood service element.
This session covered the history of the conference in conjunction with its 20th anniversary. As it turns out I have been to 8 of the 13 ICIN conferences. (I actually had a paper accepted into a 9th conference in 2001 but was not able to attend.)
For the first 8 years and 4 conferences, ICIN was mostly focused on the Intelligent Network, starting with ITU IN and mobile IN in 1989, adding AIN, Broadband ISDN, and service creation in 1992. In 1994 TINA (Telecommunications Information Network Architecture) and CAMEL were added, and in 1996 more services (intelligent agents and number portability were key focuses).
Starting in 1998 (the first year I went) the internet arrived, and by 2000 we had significant work on VoIP, along with OSA/Parlay as a key focus. By 2003 IMS became a key focus and continues to be. Video began as a key focus in 2006, with Web 2.0 and peer-to-peer being added in 2007. Grid computing and cloud computing have been added in the last two conferences.
He did a comparison of the agenda in 1989 vs 2009. Mainly, IN material was dropped and Next Generation material added, but there is a core that represents about half the conference, devoted to architectures, platforms, and services, that appears in both. (Though what was presented in 1989 for architecture was significantly different from what is discussed in 2009.)
He closed with some humorous predictions made in past conferences about the relatively low penetration of various technologies.
Jean Poufet was the initiator of ICIN and organizer of the first few conferences, and remains active with the planning of the conference in his retirement from France Telecom. He is largely responsible for the existence and persistence of the conference and for obtaining support for it.
He gave a brief history and described some of the singularly bad timing which has affected the conference in this decade and clearly caused a decline in attendance. (Starting with the 2001 event, held shortly after 9/11, many of the events took place during international events that interrupted travel.)
At the end Stuart Sharrock presented Mr Poufet with a special wine – a 1989 vintage, matching the year of the first ICIN.
Max is a long-time member of the TPC and will chair the program committee for ICIN 2010. He presented a view of what ICIN is all about, which he developed in conjunction with the TPC. He presented a view of the future, which will be broadband in fixed, nomadic, and mobile incarnations. The future includes permanent high-definition voice, image, and perhaps other senses (touch, smell, temperature). “8Mb/s each for billions of people.”
One of the bigger changes he hinted at was that “Green” technologies and applications of networks and services for sustainable developments may become a very important part of ICIN. Several papers and presentations this year addressed application of communication technology to emission reduction or sustainable development.
Another interesting theme for the future was “Caring for people”, including both supporting communities and social interactions, and application of communication to improving healthcare.
Others were more predictable – security, privacy, and support for open commerce will continue to be important themes, along with “the internet of things”. (Comment – I first heard about this as a theme at least a decade ago, maybe as much as 20 years ago, with the launch of MIT’s Project Oxygen, which was a focus for research around a future in which billions of everyday objects would be networked and communicate.)
Stuart Sharrock made the honorary membership awards, one to a Japanese professor who has served for years on the advisory board, and one to Steffan Ulner, a long-time member and former chair of the TPC.
Dan Fehrman presented the best paper awards. (Comment – this is always a very difficult process, especially for ICIN, which includes papers which cover a wide range of topics and follow a wide range of styles (research reports, case studies, business perspectives, etc.) The committee had a very difficult job in doing this selection.)
The best paper on service enablers was the paper from Session 1 from Roberto Minerva and his colleagues presenting the web as a platform and suggesting a shift in focus on what is the SDP and what role is best played by the telecommunications operators.
In the new services and applications area the award was made for the paper on visual search. This is a key topic.
In the business area, the paper on the IKEA effect was selected. Unfortunately the author could not attend due to an accident involving a family member, so the paper will be presented by Dan. The paper is thought provoking and a good choice.
Michael Crowther from BT chaired and introduced this session.
The speaker was unfortunately not the primary author, who prepared the presentation but could not attend, so the opportunity for interaction was a bit limited.
Cross provider service aggregation is becoming much more important as operators look to compete with complete packages. (Comment – I guess this is the equivalent of telecom operators bundling satellite TV to compete with Cable companies providing phone/TV/Internet packages). He presented a model showing several services that become bundled over time.
The architecture he showed included an aggregation agent and service agents running in the bundled environment. Each service agent was a proxy for an individual service, which may run in another domain. The peer SAs act to coordinate the services using an inference engine. (Unfortunately the presentation was quite detailed and difficult to follow; the paper contains a more complete description of this concept. What appears to be the case is that they have a service orchestration system using an inference engine and negotiations between agents for different services, which control what happens when multiple services are active.)
An example given was what happens when a user receives a call while watching TV. The coordination software can present a call alert via the TV and give the user the opportunity to suspend the TV and take the call. The inference engine has production rules, operating on what services and devices are available, that determine what happens (for example diverting the voice to VoIP). The rules seem to be based on the user’s past behavior in similar situations, and adaptive. (Again, it is unfortunate that the speaker could not address questions.)
This was joint work between two DT research centers. He connected past predictions of ubiquitous services to the fact that we now have them, and described a bit about how people use their smart phones (continuously!). One thing that distinguishes smart phones is that in addition to telephone calls and information flowing into the device, they allow the user to produce information which goes into the global net (blogs, posts, recommendations, etc.).
There is a lot going on in the background behind the device including personalization through profiles and context, but also “recommendation” (some people might call this advertising).
He went through a usage scenario. At home the user might want public information (weather, transit info, etc.), but also access to the workplace. In transit the user might want to purchase a ticket, as well as getting information and news. Time is limited and context might include how much time is available to view a video clip.
In the workplace the device accesses business information. Leaving the workplace you would be interested in information, but perhaps also personalized shopping (lists, information from family on needs, coupons, etc.). The return trip and arrival have similar characteristics. (Comment – one thing missing here is expense accounting, which was always an important part of how I used a laptop during business travel.)
There are many inputs to the device, including schedules, location, date/time, profile, public databases, etc. Service composition might look at your schedule and determine what you may be interested in (for example if it includes travel, do you need schedules, for attending a play, do you have tickets, how will you travel, and do you want to invite friends to join you.)
He went through what’s in the user context, including demographic information, but also personal information (interests and needs), community information (social networks, address books, memberships). There is also dynamic personal information (schedule, condition, etc.)
There is also a lot of situational information that isn’t personal (time, location, resources available, databases, environment (weather), and social information (where your friends are). Some is static and some dynamic.
Using all this information you can generate triggers that will suggest what the user may be interested in at different points in time. You can also have event specific triggers (e.g. flight cancellation) that will suggest something of interest.
“Casou” – Context Aware Service Offering and Usage -> a system which serves to present the right application to the user at the right time. The example shown ordered the icons of applications on the display of an iPhone (or similar phone) so that the most relevant ones appeared high on the screen. (Comment -> it’s interesting that our devices have become so complex and our applications so numerous that simply ordering the way they are displayed becomes a valuable service.)
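(A toy Python version of the Casou idea: score each application against the current situation and order the home screen by score. The context features and weights are invented for illustration.)

    CONTEXT_AFFINITY = {
        "transit_ticket": {"commuting": 3, "at_home": 0},
        "video_player":   {"commuting": 1, "at_home": 2},
        "work_portal":    {"at_work": 3},
        "shopping_list":  {"leaving_work": 2, "at_home": 1},
    }

    def order_icons(situation):
        # Most relevant applications for this situation float to the top.
        return sorted(CONTEXT_AFFINITY,
                      key=lambda app: -CONTEXT_AFFINITY[app].get(situation, 0))

    print(order_icons("commuting"))  # transit ticket app appears first
    print(order_icons("at_home"))    # video player appears first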
At this point they switched speakers to the second author, Karl-Heinz Luke, who presented the “ring and ride” application, which allows the user to buy transit tickets through a mobile phone. The service is location based (it knows what transit mode you will be interested in). One of the interesting things was the use of a 2D barcode to verify that the user had bought a ticket; the display could be scanned by a train conductor to determine what ticket had been bought. This application runs on any mobile phone, using only the GSM cell ID for location.
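(To illustrate how a scannable ticket like this can be verified, here is a Python sketch in which the ticket payload is signed so a conductor’s scanner can check it offline; the signed string is what would be rendered as the on-screen 2D barcode. Key handling, fields, and encoding are all my own assumptions.)

    import base64, hashlib, hmac

    SECRET = b"operator-signing-key"   # held by the transit operator

    def issue_ticket(user_id, zone, valid_day):
        payload = f"{user_id}|{zone}|{valid_day}".encode()
        sig = hmac.new(SECRET, payload, hashlib.sha256).digest()
        return (base64.urlsafe_b64encode(payload).decode() + "." +
                base64.urlsafe_b64encode(sig).decode())

    def verify_ticket(token):
        p64, s64 = token.split(".")
        payload = base64.urlsafe_b64decode(p64)
        sig = base64.urlsafe_b64decode(s64)
        good = hmac.new(SECRET, payload, hashlib.sha256).digest()
        return hmac.compare_digest(sig, good)

    ticket = issue_ticket("user17", "zone-b", "2009-10-28")
    print(verify_ticket(ticket))  # True: the conductor's scanner accepts it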
Combining this with the context environment (Casou): He showed various phone displays which prioritized the applications according to context, and gave information on how long a user had before he had to leave to make a train. At the station you had the opportunity to buy a ticket electronically, also had the ability to locate friends.
Question (Chet McQuaide) -> This is a personal assistant, others that he has seen had two problems: Information Overload, and too many presumptive judgments. Answer: The only answer to overload is to be very careful about using your profile as a filter. The presumptive judgment problem is addressed by giving the user control over how much automation he or she is willing to allow (how much will the system do behind your back).
This was joint work between teams at multiple sites.
He began by talking about the complexity of the problem of personalization: you have to respect user privacy, but you also have to know the user well enough to add value.
Their work focused on generating profile information based on a user’s activities. The idea is to have specific information on the user as an individual, not just a stereotype (market segment). The profile could then be used to enhance the user’s experience with other services.
Guiding principles -> create a holistic profile across domains, and keep the user in control (respect privacy). The environment where profiling takes place includes both controlled (by the operator) and uncontrolled (e.g. public web sites) domains. In controlled domains you have access to metadata behind the content presented and can use that in your profile (for example, data describing the purpose of some service or what context it applies to), while in uncontrolled domains there’s no such data and it has to be built by inference (e.g. given a repeated pattern of visits to a site before some other activity, one might infer the site had something to do with the activity).
The user model here includes semantic concepts (both for interests and content), and weight to semantic concepts (e.g. how much of a particular type of content is consumed).
The inference process starts with raw data (logs, CDRs), then extracts consumption information which it uses to update the user profile information, which can then be used in other applications.
Keyword inference is an important part of uncontrolled domains. It looks at the user’s browsing to extract information using deep packet inspection to extract URLs from the session. Then Session Concept Inference determines which keywords are important and produces a weighted model.
Example – the user surfs the New York Times website. Packet inspection finds the URL and passes it to keyword inference. As the pages come up they find the links in the pages to other websites. (Comment -> From the example shown I can see the problem – what this will tend to find is advertising, which may be of no interest to the user.)
Session concept inference uses a word network that structures the session concepts and keywords together with information from Wikipedia (synonyms).
There is a lot of potential for bad inferences. They gave an interesting example in which a financial news page was analyzed and produced the concept “Action Movie”, because the page mentioned “bond”. Their session concept inference concluded this was in fact the wrong context for that word. (Comment – again I am reminded of the “law” that “99% of everything is crud”, meaning that any attempt to analyze information returned from commercial web sites will trap an enormous amount of information that is irrelevant to the user’s reason for going there.)
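(To make the pipeline concrete, here is a small Python sketch of the general approach: pull keywords out of visited URLs, weight them by frequency, and use co-occurring words to pick a sense for an ambiguous term like “bond”. The data and the concept table are invented.)

    import re
    from collections import Counter

    # Tiny stand-in for the word network: senses and their indicator words.
    CONCEPTS = {
        "bond": {"finance": {"market", "yield", "treasury"},
                 "action movie": {"film", "spy", "trailer"}},
    }

    def keywords(url):
        return [w for w in re.split(r"[/.\-_?=]+", url.lower())
                if w.isalpha() and len(w) > 3]

    def infer_profile(urls):
        words = Counter(w for u in urls for w in keywords(u))
        profile = Counter()
        for word, count in words.items():
            senses = CONCEPTS.get(word)
            if not senses:
                profile[word] += count
                continue
            # Disambiguate by overlap with the other words in the session.
            best = max(senses, key=lambda s: len(senses[s] & set(words)))
            profile[best] += count
        return profile

    urls = ["nytimes.com/business/bond-market-yield",
            "nytimes.com/business/treasury-rates"]
    print(infer_profile(urls).most_common(3))  # "bond" resolves to finance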
They have built this and are demonstrating it in their “showcase” rooms. Continuing work will test this against real consumer data in a mobile environment.
Question – will users actually allow this kind of deep inspection of their network activity? Answer – the service must show value and it must be “opt in”, forcing the service to demonstrate value. (There was a lot of audience discussion on how the user will benefit, and how this compares to current services that may violate user privacy, like Google indexing information from gmail or selling information on your searches to 3rd parties without your knowledge or approval.)
Question: How do you ensure that you associate activity with the right person? The PC might be in a shared location with multiple users. (Good question. For mobile this is not a problem; the assumption at least is that your mobile is personal.) (Comment – not necessarily. In my experience people often borrow another user’s mobile device, either because of battery failure or because of capability limitations, etc.) The answer here for fixed devices was a bit vague. For services that involve login you might be able to infer who is using the device (Comment – but you don’t know who just pulled up a Google search on a PC in the kitchen), but it will be problematic for some devices (how do you know who is holding the TV remote?). (Comment – in another century I once worked for AC Nielsen, the TV ratings people, and one thing I remember was that in households they monitored they wanted people to identify who was watching. This was probably actually easier when information was generated by off-line logs than it is now, when users expect instant behavior and information is collected live.)
General comment -> one threat to privacy is that legal changes may force operators and corporations to share profile information with governments – an argument for keeping profile information only on user devices, though even that may not be enough.
Max Michel (Orange Labs) gave the introduction.
The presentation was based on experience in developing IVR and IVVR based services for Asiana Airlines.
He went through what IVR and IVVR are. IVVR is essentially IVR + video: interactive response using both voice and video. One issue is that unlike IVR systems, where there is lots of human factors experience on design (Comment – yes I know this, but why do companies so consistently design IVR services that are frustrating and unusable?), there are few guidelines for IVVR. Video calling is not unusual in Korea.
He gave some example IVVR services in Korea.
Asiana Airlines is the second largest airline in Korea.
The IVVR service had the same requirements for functions, though they added some including functions for their frequent flyer members (inquiring about mileage usage and status). The access is the same phone number as the IVR number except that you select “Video Call”.
He showed the architecture, which included both IMS and IN elements (an SCP was being used to implement the routing for the access number). The IVVR system they have is basically a high-capacity media server (from Avaya, if I heard him correctly). It uses dual SIP access (PSTN comes through a media gateway) to a server farm including a web application server, text to speech, and voice/video media servers.
The service includes 4 video clips, 320 images, and 230 recorded voice files. The logic is VXML and Javascript. (Comment – interesting: “video” is in fact a misnomer here, and mostly they are using a video channel to display a still image like a schedule board. Perhaps the reason for it is that video capable phones are so widely available and it’s easier to dial a video call than navigate to a web page?)
The service has to support two kinds of phones: CIF (a voice-over-IP phone with a large screen and high bandwidth) and QCIF (a phone with a small screen and limited bandwidth). As a result the video clips used for the two types are different. (The QCIF phone basically doesn’t use video at all, just very limited still images.) Another issue he described in the paper, but didn’t have time to talk much about, was the need to save room on the screen to allow phone alerts to be displayed while this service is running.
He showed a scenario, confirming my suspicion that the service isn’t really using video in any meaningful way, just a video channel driving what is essentially a web interface. (Comment – it occurs to me that one reason to do it this way is that dialing a phone number may be a more familiar metaphor to people than typing a web page address. One could certainly map phone numbers into web addresses allowing you to “dial” a web page and with some intelligence in the phone even use the number keys to interact with it.)
He gave some sample guidelines on how to develop the service including limiting video and colors for the QCIF phone and commented on some of the development issues.
RCS is an initiative of the GSM community, with 81 companies working on it representing handsets, infrastructure, and applications developers, with the intent of building portable services. It is being rolled out in phases: Release 1 (2008), Release 2 (June 2009), and Release 3 before the end of the year. Interoperability is achieved through interoperability of IMS systems across operators (where available).
RCS includes 3 sets of features:
- Enriched Call
- Enriched phonebook
- Enriched messaging
Many of the specific services are things that you can do today, but the problem is that you need multiple devices to do it and implementations are not consistent. The Goal of RCS is everything on one device with one consistent user interface.
She described the implementation status.
She gave an example of video sharing, which allows you to enhance an existing call by adding video to it. The complexity is that the voice call can be circuit switched in the old network, while the video goes by IP through the IP network. Enhanced Phonebook allows essentially a “buddy list” type display, where presence information is added to your identity and indicated to others.
Deployment involves putting in a presence server with some application on it, and connecting to the address book.
She showed a very complex call flow across domains implementing the presence and social information functions. Both presence and video sharing require traffic estimation (Comment – I think what she really meant was measurements). One issue is just the number of messages – some messages are used whether or not the feature is ever used. This is especially true of presence services. (Comment – yes, presence services can be a performance nightmare, because they generate a lot of traffic unrelated to telephone calls just from the messages being used to report changes in availability to your “friends”.)
Question: What does the requirement to connect to the address book mean? The first implementation makes that connection only in the client; they want to connect inside the service to assist in making call logs more useful. (I think the book is used to do calling name association with phone numbers.)
Question: How do you compete when every operator implements the same services? Is interoperability good for operators? (She didn’t really address this one).
The two key questions from the last few days are “where are the applications?” and “show me the money”. He feels he will answer those questions. We have seen a lot of great IMS applications over the past 2 days that are becoming technically feasible; are they feasible as a business? He believes they will be. He personally will be calling home tonight using a video call, getting a personalized video and audio ringback. If his wife’s number is busy he will get diverted to an IVVR where he can leave a message and get a callback. The scenario went on, though it was difficult to follow without the slides, which had failed for this talk. Basically what he described was a very high functionality and highly personalized call and messaging scenario. One aspect was the ability to move from limited-bandwidth mobile to high-bandwidth WiFi service while maintaining the call and adding or dropping video.
(At this point things derailed because they could not get the slides to display)
This service is deployed in the TDC Denmark network. The service can be deployed using IN + IMS, but there are limitations related to things that cannot be done in IN, so they centralized the implementation inside the IMS network. “Legacy can be one of your worst enemies” – what this means is that it is tempting to continue to use a legacy interface like CAMEL, but this is ultimately limiting in what you can offer. Wireless voice call continuity cannot be done in IN. (Comment – I’m not sure I believe that one specifically, but he is right in general. The biggest problem with IN and CAMEL is that it is not consistently deployed.)
One feature they support is transfer of a call from one terminal to another (IN to SIP, etc.). (Comment – he again says you can’t do it in IN, and I’m beginning to think what he means is that you can’t do it with the CAMEL SCP alone. The architectures that I worked with in the past used the existing switches and triggers, but with a new application server/SCP capable of controlling the application using both CAMEL and SIP.)
His architecture is IMS based, using separate signaling and user (media) planes. The separation allows optimizing the media path once the connectivity is known. This is connected to both IP and GSM radio networks, under control of a common service layer. Voice over GSM is done using circuit-switched voice and standard GSM call control, but they use a connection from the GSM network to the application server to trigger and route the call into the IP IMS core and anchor the call there. This is done by a “mobility server”, which makes the GSM endpoint look to the rest of the application like a SIP phone, including handling the SIP registration for that endpoint.
Services include hosted PBX and supplementary services. The IMS border nodes (media gateway, media gateway controller) are integrated into the GSM network in order to eliminate the need to export “E1” links to the IMS. (Comment – this is all very familiar. Exporting circuit trunks or ISDN interfaces from switches to talk to other nodes is a major cost element that is avoidable if you can integrate the voice gateway into the circuit switch.)
There are some interesting problems in charging that result from the fact that GSM has costs SIP doesn’t.
Question (Rebecca Copeland): I’m a great believer in IMS, but it’s being stalled by operators who don’t believe IMS is the answer. She has a problem with the assumption that you move straight from GSM to IMS without an intermediate configuration using a mobile softswitch. FMC is not the biggest motivation for operators; they don’t want to do much with their legacy voice endpoints, they are focused on more data bandwidth (LTE). VCC has been almost completely forgotten; what people want is manual movement from one device to another. (Comment – interesting, this was one of Personeta’s core services.) Many more problems; her question is still where is the money, where is the killer app that drives you to IMS?
Answer: don’t anticipate that operators will move to IMS for voice; maybe they will do it in conjunction with LTE, but not now. He feels the real motivation is actually to get better service control than can be done via CAMEL. (Comment – yes, but this can be done by routing the call into a simple softswitch/feature server without the whole IMS infrastructure. It’s really a control architecture issue.)
Question: You are one of the few people who know both IMS and CAMEL well, what are the main innovations in control? Answer – the separation of control and user plane with the control plane running SIP end to end. CAMEL has many limitations because of the association of ISUP and IN. SIP allows mid-call modification of the media path, CAMEL/GSM does not. Decoupling of the core network and access network is another one, which enables more services. (Comment – I’d agree with this, but again the greater control is really a function of SIP, not IMS, and could be achieved with a much simpler architecture not trying to address all the things IMS does.)
Question – how long will circuit switching survive? His answer – probably at least 12-15 years. The problem is the need for compatibility with 2 billion GSM devices. (Comment – yes, but those devices don’t last that long. This is an interesting example of how really hard it is to move on to a better standard, and how interfaces can outlive the equipment on both sides by multiple lifetimes (analog modems, fax, and the 10BASE-T interface are others).)
Juan Carlos Luengo was the session chair for this session.
Bernard was presenting this paper for two authors from ALU China who could not attend.
He went through the architecture for enhancing user interfaces by describing various components of the IMS architecture devoted to tracing user behavior and enhancing interfaces. The UBTF function collects user service requests for logging, but it also deduces “behavior patterns”, and stores the result in the GUP database. The result can then be used to customize menus and choices in new sessions through the UI function.
He went on to describe the details of how the UBTF collects data and stores it, and the UBCF communication, based on subscribe/notify, to gain access to user service requests. He talked briefly about the behavior inference function and showed two example scenarios. (This work seems quite preliminary; many things were described according to what could be done, rather than what had been done.)
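(As an illustration of what a behavior-tracing function could do with its log, here is a toy Python sketch that deduces a simple pattern – most-used services per hour – and reorders a menu accordingly. The real UBTF/GUP machinery is of course far richer; everything here is my invention.)

    from collections import Counter

    log = [("09:15", "news"), ("09:40", "email"), ("12:10", "video"),
           ("09:05", "news"), ("12:30", "video"), ("09:50", "news")]

    def menu_for(hour, log, all_services=("news", "email", "video", "games")):
        use = Counter(svc for t, svc in log if int(t[:2]) == hour)
        # Most-used services at this hour float to the top of the menu.
        return sorted(all_services, key=lambda s: -use[s])

    print(menu_for(9, log))   # ['news', 'email', ...]
    print(menu_for(12, log))  # ['video', ...]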
Question: It seems like the CSCF is involved in bandwidth negotiation, that’s non-standard, is this what’s proposed? (This comes from one of the examples). Answer, it’s just an example of how one could do more. (Comment – again, this seems quite preliminary and not completely thought out)
Question: Does it make sense to store this in the GUP? (Answer – it’s just an example.)
What’s Quality of Experience? -- It’s the users’ subjective judgment of how well the service is meeting user needs. This is subjective and depends a lot on how informed and experienced the user is. Quality of Service is measured objectively by measurements of the performance of network delivery. There is a correlation, but not a predictive relationship.
Different types of services require different quality of service from the network (not rocket science here). He went through the standard procedure for achieving QoS, but said that for QoE he wants to change the way goals are prioritized to line up with what is important to the user. There are several ways of approaching measuring QoE. One is surveys (like the Mean Opinion Score traditionally used for voice quality). (The paper gives several others, too fast to record here.)
He showed a large chart of potential measures and definitions of Key Performance Indicators around different aspects of QoE for users. The services and metrics would then be prioritized to match what is most important to the user to create measures of experience.
He described a complete architecture for collecting, reporting, and analyzing the metrics based on the key performance indicators. I think the intent is to then allow tuning to improve the perceived performance of the system.
He described some of the metrics collected, then gave some sample results (too complex and brief to really have much idea what was being achieved, the charts were not readable). One observation is that there seems to be a lot of correlation in the metrics – when it’s good it’s good, but when it’s bad, everything is bad.
The intent here is to decide what levels of performance will result in good user experience and allow the network to be “detuned” to achieve that level with minimum resources. What is adequate in one country may be lavish somewhere else, and expectations may be quite different, making it impossible to use the QoS measurements alone to achieve this. (e.g. if users in one country will consider service excellent in spite of a 10% blocked or dropped call rate, there is no reason to deploy enough radio sites and network bandwidth to achieve a rate of 1%.)
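(A minimal Python sketch of the idea: map measured KPIs to a QoE score using market-specific targets and weights, so a network that merely meets local expectations scores the same as one that lavishly exceeds them. The KPI names, weights, and targets are all invented.)

    KPI_TARGETS = {                 # "good enough" thresholds for this market
        "call_drop_rate": 0.10,     # the 10% example above
        "web_load_s": 4.0,
        "video_stalls_per_min": 0.5,
    }
    WEIGHTS = {"call_drop_rate": 0.5, "web_load_s": 0.2,
               "video_stalls_per_min": 0.3}

    def qoe_score(measured):
        """1.0 = meets every target; lower values flag perceived degradation."""
        score = 0.0
        for kpi, target in KPI_TARGETS.items():
            ratio = min(target / measured[kpi], 1.0) if measured[kpi] > 0 else 1.0
            score += WEIGHTS[kpi] * ratio
        return score

    print(qoe_score({"call_drop_rate": 0.05, "web_load_s": 6.0,
                     "video_stalls_per_min": 0.2}))  # ~0.93

A network scoring 1.0 on a measure like this is a candidate for detuning, which is exactly the point being made.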
Comment (Max Michel): You need to distinguish between contractual quality and perceived quality. The iPhone met its contractual QoS, but the lack of availability of 3G access caused people to be unhappy with the phone even though it worked as advertised.
Social networking tools are on the increase everywhere. Within enterprises sometimes they are dictated, but they quickly become part of the culture. Tools alone are not enough – shared purposes and shared goals are what build social networks. There is a difference between effectiveness and efficiency: effectiveness is about how well you achieve a goal, efficiency is about what resources are used to achieve a result. He gave some general info about Nokia/Siemens as a company. They targeted the Operations and Business Services unit, about 4,000 employees, with a survey.
The survey had 26 questions assessing the user profile and whether a portal would improve the user experience. (Comment – it looked a bit at this point like they were really trying to decide whether or not to implement this “portal”.) They heavily promoted the survey to people through existing tools and personal emails. They got responses from 5% of the population and considered that good. (Comment – I was surprised nobody asked about this. A 5% sample may be sufficient if it is randomly selected, but this 5% is presumably the 5% willing to respond to the survey, which could be biased in many ways.)
They had lots of interesting data on what tools people use. There weren’t many surprises: YouTube, LinkedIn, and Facebook were very popular. There was a lot of data on use of other tools, demographics, and how the users network – all in the paper, impossible to summarize here. (Comment – the particular graphical representation of the survey responses is quite confusing, to me at least.)
A question about what keeps them from using more tools found 3 things stand out: difficult to search, too many tools, and too little time. 80% liked the idea of a portal; only 5% disagreed.
Some interesting results on users – software developers and younger workers are less active on social tools than others (Comment – probably a combination of time and personal styles). Those who call co-workers in other countries are the most active. (Comment – yes, those will be the “extroverts”.)
Question (Chet McQuaide) – would you find that if the tools exist and are consistent you get the Microsoft effect (learn one and you know how to use the others)? Maybe, but that’s not what they think they can do.
Question (Dave Ludlam) – what’s the conclusion on the lack of interest in user profiles? Does that mean that people find everyone equal? (Comment – it’s clear that it is not clear what is meant by user profile. I thought what users were saying is that they really didn’t care about editing a personal profile to reflect what they were doing à la Facebook, LinkedIn, et al. Dave clearly envisions being able to look at someone else’s profile as a way of validating another’s authority to give an opinion. I think the result may reflect that programmers, the bulk of the work force, are probably less extroverted than the rest of the population.)
Question (me) – how was the portal presented? It sounds like the proposal for solving “too many tools” is to add another tool? Answer – the portal was presented as personalized to the person and to the task.
Question – was there any attempt to get social graphs and correlate the results to see if some groups used some tools more than others. (No, good for next year’s followup)
Question – is the lack of interest in profiles because they don’t contain much interesting in company databases. If it had a search facility would it be more interesting? (Not clear).
Question – social tools spread virally – people tell their friends about cool things; can we take advantage of that?
Question (Chet) – sometimes pure social connections, like employee clubs, can be very useful in establishing networks. Maybe this is a good substitute for profiles? (Answer – it’s not clear to him that this would work.) (Comment – I don’t think these are really equivalent, but Chet is right; most of us have used our non-work-related social connections to help gain answers relevant to work.)
Comment – just the term “Social Networking” may introduce a bias against tools for someone who associates this with tools they consider “frivolous” (e.g. Facebook, Twitter). Not clear.
Question – did you consider what may be the most important informal social connection – people who go outside the building to smoke, and others of this sort? No, didn’t study it.
Ubiquitous Computing + Ubiquitous Connectivity = The Internet of Things.
Mobile internet is a fact. 25% of the population use the internet, and over 60% are mobile subscribers. (Comment – amazing numbers)
He covered projections for how many connected endpoints there would be in 2015 and 2020.
He went through a lot of standards initiatives related to this including naming, payment, location, and other issues.
To really exploit the internet of things, he is looking for a way for ordinary people to program the interactions of all their devices. Web2.0 is a good starting point, but how do you make it possible for an elderly woman to build a mash up?
“The Internet of Things introduces a long tail of networked items” – some things are really networked publicly, some are behind a home gateway, many more are nomadic, and many more are in a “swarm of nodes”, only networked in a mesh through others. The service provider can provide help in managing and programming the networked objects, but not for the long tail.
He gave a humorous example of members of a bicycling club looking for a riding path with the most bumps (skiers may have been a better choice). They could do it using the accelerometers in the phones of all the bikers and ad hoc networking to share the information, and build an application to engineer routes from this.
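(A toy Python sketch of the bikers’ DiY mash-up: phones contribute accelerometer jolts tagged with GPS fixes, and the club scores road cells by average bumpiness. The data and the grid binning are made up.)

    from collections import defaultdict

    # (lat, lon, vertical_jolt_g) samples contributed by many riders' phones
    samples = [(48.1401, 11.5601, 0.8), (48.1402, 11.5602, 1.9),
               (48.1501, 11.5702, 0.2), (48.1402, 11.5603, 2.1)]

    def bumpiness_by_cell(samples, cell=0.001):
        cells = defaultdict(list)
        for lat, lon, jolt in samples:
            key = (round(lat / cell), round(lon / cell))  # coarse grid cell
            cells[key].append(jolt)
        return {k: sum(v) / len(v) for k, v in cells.items()}

    scores = bumpiness_by_cell(samples)
    print(max(scores.items(), key=lambda kv: kv[1]))  # the bumpiest cell wins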
His thoughts on filling the gap of empowering people to program include application enablers, domain-specific enablers, and at the far end “DiY” enablers that help people build applications like the one just described.
Some conclusions:
- Web of Things 3.0 is coming.
- Over-the-top approaches might not work. (Comment – not sure why not?)
- There ought to be a role for service providers.
Question (Rebecca Copeland) -> How do you prevent malware attacks, which could be very bad if they attack something like smoke alarms or panic buttons (or power grid sensors). His answer was the network operator and home gateway have to provide filtering. (Comment – that’s not good enough. If most of these things communicate wirelessly behind the gateway, nothing prevents someone from committing a “drive by hit”, using a networked machine to introduce malware on an insufficiently protected node.)
He started with an OMA view of service architecture based on IMS. Service composition is a key problem – allowing services to be built independently and then composed into a single user experience. Two different things are going on:
- Service orchestration – coordinating multiple services in a single session.
- Session blending
IMS doesn’t really support service composition well – there are no APIs. His solution was to add a new element, a Service Enabler Gateway, plus an application server that implements the orchestration and blending. All communication is via SIP. (Comment – not sure what’s new; this sounds a bit like the SCIM.) They chose JSR 289 (SIP Servlets) for the composition server, and he went through the SIP Servlets architecture to show why it could be used for orchestration. The key element is the application router, which allows multiple applications to access the SIP sessions. Three models (a sketch of the first follows the list):
- Distributed Feature Composition (AT&T has a standards contribution on this). This uses knowledge of user subscriptions to applications and a precedence list, makes decisions based on region (originating/terminating/neutral) and subscriber, and is influenced by the application. This is fine-grained composition. (Comment – I am strongly reminded of the blending of “Flavors” in Common Lisp. It would actually be worth studying the evolution of Flavors-style blending, since they discovered the need for some unanticipated ways to compose services.)
- Finite State Machines (SCXML). This is a coarse-grained mechanism. (Comment – there is a long history of feature-interaction management based on FSMs.)
- BPEL – this is used in web services and is basically a mechanism for describing sequences of processing and the data flow between them. (Comment – yes, but services are much more temporal in nature than most things done with BPEL, which is why most approaches have relied on state machines in some way.)
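The paper itself used JSR 289’s Java application router; the following is only a language-neutral sketch (in Python, with invented names and subscription data) of the DFC-style precedence idea: the router hands the session to the subscriber’s next feature application for the current region, one routing decision at a time.

```python
# Hypothetical sketch of DFC-style precedence routing, in the spirit of a
# JSR 289 application router: each subscriber has an ordered feature chain
# per region, and the router returns the next application to insert into
# the SIP session. Data and names are invented for illustration.

SUBSCRIPTIONS = {
    "alice": {"originating": ["call_blocking", "voicemail_greeting"],
              "terminating": ["do_not_disturb", "voicemail"]},
    "bob":   {"originating": [],
              "terminating": ["call_forwarding", "voicemail"]},
}

def next_application(subscriber, region, already_applied):
    """Return the next feature app for this session, or None when the
    chain is exhausted (the router is called once per routing decision)."""
    chain = SUBSCRIPTIONS.get(subscriber, {}).get(region, [])
    for app in chain:
        if app not in already_applied:
            return app
    return None

# A terminating call to alice visits do_not_disturb first, then voicemail.
applied = []
while (app := next_application("alice", "terminating", applied)) is not None:
    applied.append(app)
print(applied)  # ['do_not_disturb', 'voicemail']
```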
Features of the Service Enabler Gateway (this is the element that separates the SCSF from the orchestration/blending server and the real service enablers): it performs policy-based authorization and service discovery.
He described related work, including the SCIM, which he correctly notes is not well defined.
Question (me): Did you look at work on feature interaction in call control? (He answered yes, but it didn’t seem like he had done that much.)
Question: Did you consider coordination with the base call-control services defined for IMS (MMTel)? Yes, you have to put these into a service enabler. (Follow-up: yes, but can you coordinate with individual elements of MMTel, rather than the service package as a whole?)
Question (Rebecca Copeland): Can you do this with downloaded logic? (They considered that.)
The paper was presented by John O’Connel on behalf of the author, who had to cancel her trip.
The driver for this work is that the service market is shaped by rich, personalized interactive experiences. There are a lot of rich services out there. Personalization can be much more effective with the context information that service providers already have but don’t use. The question is how to do this and how to monetize the sharing of context information.
Many different types of context: identity, location, account status, usage history, devices, profile, friends and friend status, etc. We assume that real-time context is the most valuable, but that’s not always the case (it may well be more interesting to know what film genres I like to watch and how often I watch than what I happen to be watching at the moment).
The measure of success is that users should feel they are getting better services as a result of this.
Semantic Web technology is a way of organizing information; OWL (the Web Ontology Language) is a primary tool. (Comment – interesting; it’s been 25 years since I worked in this area in conjunction with expert systems. It seems to be still very active.)
Many example ontologies have been built around wine. (Ontologies give properties to information and relate facts.) A wine ontology could be used to recommend wines for a meal, or which wines might be likely to be best in a particular region.
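To make the ontology idea concrete, here is a toy sketch using the open source rdflib Python library (my choice of tool – the speaker did not name one); the wine facts and properties are invented for illustration and are far simpler than the real W3C wine ontology.

```python
from rdflib import Graph, Namespace, Literal

WINE = Namespace("http://example.org/wine#")
g = Graph()

# Toy facts: each wine has a body and a set of dishes it pairs with.
g.add((WINE.Chianti, WINE.hasBody, Literal("medium")))
g.add((WINE.Chianti, WINE.pairsWith, Literal("pasta")))
g.add((WINE.Riesling, WINE.hasBody, Literal("light")))
g.add((WINE.Riesling, WINE.pairsWith, Literal("fish")))

# A recommendation is then just a query over the graph: which wines
# pair with the dish being served?
results = g.query("""
    PREFIX wine: <http://example.org/wine#>
    SELECT ?w WHERE { ?w wine:pairsWith "fish" . }
""")
for row in results:
    print(row.w)  # http://example.org/wine#Riesling
```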
HP has an open source platform in this area.
He described how this could be used, taking information from all kinds of sources and figuring out how to personalize services for the user. This is not unlike the service interaction problem described by the previous speaker.
The Semantic SDP
- First generation was just exposure of Parlay et al.
- Second generation was Web 2.0.
- Third generation includes context management and the inclusion of both network-based and outside information and applications.
He gave a layered architecture for context management, then asked the question: How to monetize it?
- Personalized recommendations
- Advertising
- Fraud management
- Management of ad hoc communities
- Automatic invocation of services (push)
An example – a personalized recommendation service for travelers, in which a traveler arriving in a city gets recommendations.
He went through the use of the proposed architecture to generate the recommendation.
There are significant privacy issues here. They use the existing privacy/profile management capabilities of IMS to restrict access to the user profile according to user preference.
One thing he did when preparing was to ask whether the telecom industry has shown any interest in this area.
Question (me) -> How do you solve the problem we have now, which is context-based recommendations made with incomplete context information? Answer: you have to filter the recommendations by what makes sense. (Comment – actually the telecom provider may have an advantage here in having a more complete context than isolated web sites, like travel booking sites, do now.)
Question (Maurizio) -> Semantic Web is only one of many AI technologies, and this looks like an AI kind of problem; have you looked at others? Yes they have, but he wasn’t in a position to elaborate because that was the absent author’s part of the work.
Thought (Chet McQuaide) -> Telecom focuses on real-time services and has not really done much with context in non-real-time uses. Bundled services are sold naively, without using context to personalize the recommendation based on the subscriber’s usage history.
This paper won the best paper award for business models. The author could not present it, so Dan Fehman, TPC chair, presented it. Dan is very familiar with IKEA, since it was originally a Swedish company.
The key need is to move from mass market services (build once, sell millions) to a mass services market (build many, customized for the user).
IKEA sells furniture and other goods that require some assembly. This allows customization. The model won’t work for everything (he showed a picture of what would happen if IKEA sold cars). The IKEA effect is that people learn to love the process of acquiring things there, in part because they have some feeling of accomplishment from doing the assembly and in part because they get a customized product at a mass-market price.
He showed an example of mass market versus mass services with groceries: a little store with little selection and a line of buyers, versus an enormous market with lots of choices.
The customer experience depends on the whole value chain – where there is competition, who gets the branding, how the supply chain is optimized.
The IKEA effect is “labor leads to love” -> you get satisfaction from putting effort into personalizing things. (Comment – I struggled with understanding this paper, probably because I have never been in an IKEA store. Perhaps another analogy helps. My father was in the advertising business and among other products helped market Kraft prepared foods. He was involved in consumer market research that discovered there was an optimal level of “readiness” for something like a prepared dinner or a cake mix. Customers were more satisfied with the result if they put some of their own labor (and in many cases some of their own ingredients) into the product than if everything was done for them, but it can’t take too much effort.)
Applying this to telecom means allowing users to participate in creating a personalized version of the service that addresses their needs. Users become fans and sell the service to others.
The economic slowdown has hurt telecom – usage actually declined in 2009: more subscriptions but less usage, and deep discounting to get subscribers. Some business models in use:
- MVNOs – slash prices to gain subscribers, then sell out to someone looking for a customer base. (Not a good model for the current economic climate)
Telecom 2.0 is about smart cooperation. What do users want? Free (or at least inexpensive), powerful, and stable. This implies some kind of subscription model.
He showed a business model for mobile telephony. He feels that technology has drastically lowered the entry barrier, since services can be built cheaply on the web to personalize and scale the service.
Question (Dave Ludlam) – Are the future business models for MVNOs the same as the common one in the past (gain subscribers and sell the business), or is there something else? Linus’s strategy is to sell out. Dave also asked whether there was an opportunity to go global more quickly using web technology, which didn’t get a complete response.
Rebecca is a frequent ICIN speaker, mainly on services and IMS, but she has been doing so much travel that she recognized the airline industry and telecom have lots of similarities. (Comment – yes, I agree; both have huge fixed costs and a rather generic product to offer the end user, meaning both have been hurt by commoditization of their product and by overcapacity.)
There are differences. Telecom is subscription-based; air travel involves a purchase decision for every trip. The nature of the infrastructure is a bit different, and telecom is a daily need for people while air travel isn’t for most.
Common trends:
- The internet has a big effect (low fares, etc.)
- Blurring of enterprise vs. consumer
- Loss of national pride as a factor
Impacts of a “no frills” policy – this is the heart of the disruption in the air industry because it relegates the industry to cost-based competition. Unfortunately it’s here to stay, and the same thing is happening to telecom: Skype and other VoIP services have basically given you no-frills voice. (Comment – actually the situation is worse: Skype is first-class service at a bargain price compared to traditional offerings from the traditional carriers.)
Globalization – people don’t care who they fly with any more, and national carriers fly anywhere. The equivalent in telecom is to be able to operate anywhere in the world; we’re not there yet except for over-the-top IP carriers. One perhaps unexpected consequence is that both air carriers and phone companies no longer get national support. (Comment – US air carriers and telecom never did, but this was common elsewhere.)
She pointed out the problem of VoIP carriers operating without regulation while traditional carriers are limited.
One problem – the notion that the internet is always available is a myth, but the mobile phone really is. (Comment – well, maybe, but my laptop is on line here, while my mobile phone is a dumb brick (not on, not connected) outside of the US.)
On-line check-in was a driver for check-in machines. Pushing check-in onto the user and onto user equipment has been a major cost savings for the airlines, and it’s acceptable to the consumer. Telecom has to learn from this.
Airlines have learned to charge for “luxury”. People will pay for it; telecom needs to do it too. (Comment – airlines have set a new standard for “luxury”: food, a reserved seat, and checking a bag all used to be basic, and now they are extras. Could telecom get away with something similar?)
Cost reduction is essential. Some things the airlines do:
- “DIY” (self check-in).
- No refunds (everyone buys non-refundable tickets; the practice is spreading to hotels). Telecom needs to learn from this one.
Managing capacity is a common problem in both industries. Demand for broadband capacity was not hurt by the recession. Quality and safety are not negotiable. The airline industry has gotten very good at filling planes: they size flights very dynamically, and they even share equipment with competitors (code sharing) to do it. Telecom could learn from that one.
Some interesting changes:
- Cities have gained power – cities build airports, cities are funding broadband, because infrastructure brings business.
- Some businesses are really hurt (airline caterers)
- New opportunities – the airline car-park syndrome. With low-cost flights, parking your car may be more expensive than the flight, so people may choose airports based on parking cost rather than flight cost or convenience.
Lessons for telecom:
- “No frills” is here to stay.
- The disrupting players are not always the winners (Freddie Laker failed after setting the standard for price competition).
- Adopt web-style “light touch” customer support (this moves costs to users).
- Telecom can bring web apps to everyone, not just PC users.
- Non-internet business, like legacy telecom, may not go away
- The web is a king-maker of brands, but web brands are short-lived.
- Remember the parking lot syndrome
- Flexible capacity is critical.
- Municipal power is on the rise.
- May need to create scarcity to get prices up (happening somewhat in mobile data).
- Don’t forget the luxury market. (She talked about phones for sale for thousands of Euros – made of gold or platinum. They do sell them.)
Question/Comment: Video is a common disrupter to both airlines and telecom. John Chambers says that Cisco cut travel by 50% by using video conferencing, and video is a common reason people give for using Skype vs. traditional telecom. (Answer – Rebecca was involved in a major effort to roll out video services for BT. The main problem is that everybody expects quality if they pay for it: Skype can roll out rubbish, but BT can’t, at least not if they charge for it. Nobody knows how to do anywhere-to-anywhere video with quality.) (Comment – actually BT and other traditional operators have a special handicap: even if they don’t charge for it, customers have an expectation of quality that they don’t have for players like Skype, and they won’t accept the level of quality they get from Skype.)
Question/Comment: Airlines have formed international alliances in order to become ubiquitous (one frequent flyer program, one scheduling system, etc.); telecom has to figure out how to do this. Answer – the paper covers this. Alliances in the airlines started out very weak and have gotten much stronger (she gave an example involving KLM flying certain routes for others). She said that clearing houses for telecom were one response, but they need to be much better.
Question: In telecom we can anticipate a requirement for universal service; airlines don’t get this (significant numbers of people have never flown). Any implication for telecom in this?
Question (me) – What’s the implication for vendors, who are much more like the suppliers to the airline industry? (Alcatel-Lucent isn’t like United; they are like Boeing.) (Answer – she did look at this: airline vendors suffer harder and faster than airlines because their cycles are much longer.) (Comment – I’m not sure what’s really in this analogy. Airlines have few suppliers, who supply big things; telecom companies have many more. The safety requirements and regulation in airlines also make a big difference: even a discount carrier needs an FAA-certified plane, which has no equivalent in discount telecom.)
Question (Chet McQuaide) – We have talked about global wholesaling in telecom; might there be a model like this in the airlines (a global company owning planes, pilots, maintenance, and gate rights, wholesaling them to marketing companies that sell service to consumers)? (I don’t think this provoked much discussion.)
Question (Dave Ludlam) – Will there be global regulation of social responsibility (universal service), and does that help? (Not clear.)
OpenID is a common user-identity system that allows single sign-on for multiple services. It is a distributed, user-centric implementation, allowing the user to obtain an identity from one provider and shift to another provider without having to change their identity. The solution is based on URIs, familiar to users and developers. The benefits to the user include friendliness (single sign-on) and security; service providers get a much simpler user-registration problem.
An OpenID identifier is a URI (e.g. http://ivar.ubisafe.no). There are three actors in the scenario: the user, the service that wants to verify identity, and an OpenID provider. Most providers use password authentication to sign the user on. It has been widely deployed: about 50,000 service providers support it (growing at about 2,000/month), with some 500 million users.
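For readers unfamiliar with the flow, here is a grossly simplified sketch of the three-actor idea, using an HMAC-signed assertion as a stand-in for the real protocol’s discovery, association, and signature rules; every function here is hypothetical, and a real deployment would use an OpenID library rather than anything like this.

```python
import hashlib, hmac, secrets

SHARED_SECRET = secrets.token_bytes(32)  # established during "association"

def provider_authenticate(user_uri, password_ok):
    """The OpenID provider signs an assertion after authenticating the user
    (most providers use a password)."""
    if not password_ok:
        return None
    assertion = f"identity={user_uri}&nonce={secrets.token_hex(8)}"
    sig = hmac.new(SHARED_SECRET, assertion.encode(), hashlib.sha256).hexdigest()
    return assertion, sig

def relying_party_verify(assertion, sig):
    """The service verifies the signature instead of handling passwords itself."""
    expected = hmac.new(SHARED_SECRET, assertion.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

# The user signs in once at the provider; any relying party can verify.
result = provider_authenticate("http://ivar.ubisafe.no", password_ok=True)
assert result and relying_party_verify(*result)
```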
SIM authentication uses the standard SIM-card authentication techniques. UnifID uses a mobile with a SIM card, or a USB dongle connected to a SIM card. There are lots of portable identity frameworks (SAML, among others).
Presence services are popular, using SIP/SIMPLE in IMS/NGN. There are Parlay APIs for Presence, and some proposals using REST have been made. Telecom is behind other areas (e.g. Twitter) in supporting presence.
When they experimented with Presence in IMS using REST they hit two problems: authenticating presence information, and increased SIP traffic related to notifications.
Basically they address some of the limitations of REST in IMS by caching the presence states on the SIP server, to avoid having to constantly go back to the presence server.
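The paper describes the caching at the architecture level only; the sketch below is my own minimal illustration of the idea, with an invented TTL and an invalidate-on-NOTIFY policy that the authors may or may not use.

```python
import time

# Hypothetical sketch: the SIP server keeps a short-lived copy of each
# user's presence state so REST reads don't hit the presence server on
# every request. The TTL value and all names are invented.

class PresenceCache:
    def __init__(self, fetch, ttl_seconds=30):
        self._fetch = fetch          # function that queries the presence server
        self._ttl = ttl_seconds
        self._cache = {}             # user -> (state, expiry_time)

    def get(self, user):
        state, expires = self._cache.get(user, (None, 0.0))
        if time.time() < expires:
            return state             # served from cache: no presence-server hit
        state = self._fetch(user)    # cache miss: one query, then reuse
        self._cache[user] = (state, time.time() + self._ttl)
        return state

    def invalidate(self, user):
        """Call on a SIP NOTIFY reporting a state change, so the cache never
        serves presence staler than the notification stream."""
        self._cache.pop(user, None)

cache = PresenceCache(fetch=lambda user: "available")
print(cache.get("alice"))  # queries once; repeated calls hit the cache
```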
Question – did you also look at the pilot API under development for IMS? (No.)
Question (me) -> Does using REST solve the problem of SIP/SIMPLE generating a lot of traffic when there are users whose presence is watched by many people? (He didn’t really answer it.) (Comment – this relates to a problem we encountered in building a hosted enterprise solution using SIP for call control and presence. We discovered that the messaging to update presence status using subscribe/notify was much greater than the call-control traffic, and could easily overwhelm the server and create performance problems unless you separated it through some kind of intelligent routing.)
The scenario is to provide push notification of local points of interest that have been recommended by one’s social network. There are existing applications like this:
- Google Maps
- Mobnotes, which allows people to share information and is linked to a social network so you can see notes made by your friends.
- Augmented-reality locations: POI Radar (sweep an area with your mobile phone camera and it marks places in view with the opinions of others).
POI Radar was built to validate the technical aspects of the service; they haven’t addressed the business model. The idea is that you walk around a city without having to scan with your phone, but when you get near a POI noted by your social network you get alerted to it and can look.
They use a TI context-aware platform that collects a lot of context data: location, cell ID, WiFi and Bluetooth context, etc. It also gathers information from Facebook and Google OpenSocial. Finally, it has information about the points of interest. A recommendation engine processes this to determine which points are important, and that is mashed up with the location context to decide whether the user needs to be notified. The applications involved in the mash-up use REST APIs that hide from the application many of the details of how things were computed.
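The final decision step of such a mash-up is easy to sketch. The following is a hypothetical illustration, not the platform’s actual logic: notify the user when a POI recommended by a trusted contact is within an (invented) 200 m radius.

```python
from math import radians, sin, cos, asin, sqrt

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters (haversine formula)."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6371000 * 2 * asin(sqrt(a))

def pois_to_notify(user_pos, pois, trusted_contacts, radius_m=200):
    """pois: list of dicts with 'name', 'lat', 'lon', 'recommended_by'.
    Returns names of nearby POIs recommended by trusted contacts."""
    return [p["name"] for p in pois
            if p["recommended_by"] in trusted_contacts
            and distance_m(user_pos[0], user_pos[1], p["lat"], p["lon"]) <= radius_m]

pois = [{"name": "Cafe X", "lat": 48.8584, "lon": 2.2945, "recommended_by": "alice"}]
print(pois_to_notify((48.8580, 2.2940), pois, trusted_contacts={"alice"}))
```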
ContextML -> an XML language to express context, including location (with accuracy, technology used, and expiration time).
Social-network context is also expressed in XML (ContextML), including information on which of your contacts you trust for what (e.g. you may trust one person on restaurants and another on photographic sights).
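I did not capture the actual ContextML schema, so the fragment below is an invented stand-in that just shows the flavor of the two kinds of context described above (location with accuracy and expiry, and per-topic trust), built and read with Python’s standard ElementTree.

```python
import xml.etree.ElementTree as ET

# Invented fragment in the spirit of ContextML (the real schema differs).
doc = ET.Element("contextML")

# Location context carrying accuracy, technology, and expiration.
loc = ET.SubElement(doc, "context", scope="location", expires="2009-10-29T12:00:00Z")
ET.SubElement(loc, "position", lat="48.8566", lon="2.3522")
ET.SubElement(loc, "accuracy", meters="25", technology="wifi")

# Per-topic trust: one friend for restaurants, another for photo sights.
soc = ET.SubElement(doc, "context", scope="social")
ET.SubElement(soc, "trust", contact="alice", topic="restaurants")
ET.SubElement(soc, "trust", contact="bob", topic="photo-sights")

xml_text = ET.tostring(doc, encoding="unicode")
parsed = ET.fromstring(xml_text)
for t in parsed.iter("trust"):
    print(t.get("contact"), "trusted for", t.get("topic"))
```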
The service is interesting and potentially useful, but there are a lot of issues with services like this around privacy and security.
Question: It would be useful to do planning by “pretending” to be somewhere, navigating on a PC and getting recommendations. (Comment – yes, this captures what I was feeling a bit odd about. Mostly when I do things like this I use Google Earth or some similar tool to navigate through a city looking for points of interest. That may be more interesting than having the information available in real time; after all, why would I go to a particular part of the city if I didn’t already know there was something interesting there?)
Question: For this to work does the service provider become the Identity provider? (Effectively yes, but it’s a subset identity and links to identity in the social networking sites.)
Question: The application needs access to private information like buddy lists; how do you control that? (He didn’t really cover it, but there are access controls in the platform that allow you to set different levels of control over who can access various parts of your context. Igor noted that the IETF is doing work in this area.)
This year I was chair of the poster session. ICIN has been doing posters for the past 4 conferences. Papers in the poster session are selected according to technical merit, topic, and institution, and tend to include novel topics that don’t fit into the sessions as well as papers from universities and small companies that have not appeared at ICIN before. This year 11 papers were presented as posters, and they were available for the entire conference. I will describe only a few here which presented ideas that were especially interesting to me.
This poster explored the challenges in constructing a “smart” electrical grid, noting that communication is key and many of the issues are exactly the same as in building a unified communications network. The poster proposed adapting the IMS architecture and IMS components to perform this task, and showed how the open service environment of IMS would allow new applications for the power grid to be created and deployed to use energy more efficiently.
The desirability of applying communication technology to help solve our significant challenges in energy production, distribution, and use was cited by many speakers and will no doubt become a theme in future conferences, but this paper was probably the only one that significantly explored the issue and presented an interesting solution.
A*Star is a national research laboratory of Singapore especially interested in commercializing new computing and communication technologies, and it has been working with automakers and others in this area for some time. This poster addressed the growing need for communications with vehicles, noting that this includes communication within a vehicle, between vehicles, and between vehicles and external networks. It discussed potential security issues and solutions for these needs, proposing a vehicle “gatekeeper” to provide secure communications between the networks inside a vehicle and the outside world. ICIN has had papers on vehicle communications in the past, and I expect this to be a growing focus in the future. The poster was a bit sketchy, but the paper has much more information and extensive references in this area.
The paper had co-authors from two different organizations.
There were many other interesting poster session presentations, covering such areas as service creation platforms, “software as a service” vs. “platform as a service”, business models for mobile applications, and specific applications.
Dan went through the session summaries for the individual sessions. Dan didn’t try to read these but just put up the summaries. There was way too much information for the time available, and no real incentive to discuss. (Comment – this was unfortunate. Maybe we were just too burned out after 4 days of tutorials and sessions, but I think an opportunity to draw some overall conclusions was lost.)
For overall conclusions, one point he brought out was that there are many comparisons possible to other businesses; telecom has to learn from them.
Second, many people noted there are too many programming standards and fragmentation is a real problem; we have to figure out how to reduce them.
Third, home networks and networks beyond the home are becoming much more important and many devices will be connected. We don’t understand all of the impacts of these on the network.
Chet McQuaide (vice chair of the TPC) gave some of his common themes:
- Making services personal
- Global services versus services limited by region
- Extracting value from what’s already available
- Content affects its own delivery
- Energy and environment opportunities related to communication
Max Michel put up the definition of ICIN and asked for reactions or omissions, anticipating what should be in the call for papers for the next ICIN. I will not go through the slide he used but instead summarize what people suggested as omissions:
- Consider the endpoint and the end-to-end principle and their impact on service architecture (Me)
- Look at the world as a “network of networks”. (Someone from Telecom Italia)
- Green and Telecom – maybe need a green session
- Machine to machine communications
- Complexity factors across our industry: how do we handle system/service integration addressing vertical markets (as needed for new revenue)?
Stuart Sharrock went through some administrative details. Stuart made it explicit that he now owns ICIN. He said he wanted to make a special owner’s award of honorary membership to Christian Chaubrand as the most loyal supporter of ICIN. (Comment – Christian has been with ICIN since the beginning. This will most likely be his last ICIN, as he is leaving.)
ICIN 2010 will happen; it will be in the same season, and it almost certainly will not be in the same location.
Roberto Minerva will be the new chairman designate (and become TPC chair in 2011).
Max talked about setting up some kind of on-line community to continue the ICIN discussion. Max then made the awards for best presentation.
- For the first day, Roberto Minerva of TI. (The same paper won best paper.)
- For the second day the award went to the paper on quality of user experience.
- For the 3rd day, Michael Brenner from ALU won the award for a paper on data mining (I was not in that session).
Dan finished up by talking about what a challenge the conference was to organize, and how it could not have been done without email; he got over 2,000 emails (Stuart sent 666).
As a member of the TPC for about half the conference’s history, I can attest to the importance of email and audio conferencing in organizing this conference. It is notable, though, that the most critical process – selection of papers and organization of papers into sessions – takes place in the one face-to-face meeting held by the TPC outside the conference, with only limited input from those who can attend only by teleconference. (Comment – “The next best thing to being there” (an old AT&T Long Distance slogan, for those who don’t remember it) is still a distant second place.)
With that, ICIN 2009 was closed. The TPC and IAB had an organizational meeting afterwards to begin discussing the logistics for the next conference, and made progress towards a call for papers, which will be issued as soon as other arrangements are solidified.