By Warren Montgomery (firstname.lastname@example.org)
This is the 11th ICIN conference, a conference which had its origins in the “Intelligent Network” era and is held every 1-2 years. ICIN has been re-orienting itself towards the convergence of IP, Telecom, and IT and is now more about Web2.0 and IMS than traditional networks, but it remains the place where people who do research and applied research in intelligence in networks come for 3 days of intensive technical sessions and discussions.
Some of the more interesting themes coming out of this year’s conference include:
What follows is a much more detailed description of the conference sessions and the most interesting technical discussions I had at the conference.
Attendance was down this year, 150 vs. about 200-250 average for the past several years (the peak was more like 500 in the late 1990s). As a member of the technical program committee I was part of the discussion on reasons why. One is obvious – Alcatel, Lucent, Siemens, and Nokia each sent a delegation in the past, but mergers have reduced the number of participants. The problem is more general though, because in the past many attended from the big European PTTs, but few do now. (Comment – Alcatel-Lucent is still the largest single participant, which doesn’t bode well for future attendance.) This is a shame really, because most attendees, especially new ones, indicate this is the strongest conference around in the area of service architecture. (Comment – many of the people who have attended a lot of ICIN conferences said this year’s program was the strongest ever.)
One of the constants of this conference is that it is always
I was not registered for tutorials but had the opportunity to sit in on the end of this tutorial focused on new models of service creation basically being described as “Web2.0”. The tutorial was presented by Fokus, a German research lab. Some of the tidbits I gained from this include:
An interesting footnote at the end: Fokus provides many IMS components in an open-source format to enable more interoperability testing and foster development in the network.
They run an annual IMS workshop, next one in November in
He was an invited speaker asked to present on security. Security is a very old topic in networks and has had very different definitions, with older concerns being mainly about protecting the secrecy of communications and the integrity of networks. In the modern internet, issues of phishing, spam, spyware, etc. come up. Those mainly represent properly formed software and communications that have unwanted results.
Will the internet ever be secure? No – basically it’s about preventing crime. Crime is human nature. You have to change human nature to prevent crime, and internet security can’t do that. One problem with achieving security is that it’s often someone else’s problem -- different service providers operate at different layers. Your access provider doesn’t know your IM address or your email address and can’t defend against threats related to them. There are lots of threats out there and it’s hard to figure out how to block them. He gave an example from his own experience of having “ActiveX” disabled because he feels it is too good a way to introduce “malware” onto his machine, but being told by some websites that he can’t use the site unless he enables it. That means he has to decide on a case by case basis whether the site is legitimate and worth any potential risk. This is a hard thing for most users to do. (Comment – I can relate to this as someone who disables a lot of web technology for the same reason and faces the same problem. The basic problem is that those who designed the technologies we all rely on to display and interact with internet content did not take security into account from the start and did not limit what could be done by these technologies sufficiently to make it possible to use them safely.)
Most threats are at the application layer, not the network layer. Addressing threats is largely about identity – if you know for sure who email is from you eliminate many threats. Stronger authentication (two factor authentication) is being pushed by Microsoft for this. (Comment – much of the problem is that people don’t want to deal with awkward authentication technologies. It has to be worth a hassle to do it, and generally people feel it isn’t. I personally believe that we would be more secure if more web sites would simply do things anonymously without requiring authentication, as when users are confronted with the need for multiple passwords just to display their hotel points or read weblogs, they tend to fall back on the same ones they use for things that really do need security, like on-line banking, or fall back on a general scheme for passwords making it easier for someone to get a hold of the important ones.) Cisco has software to monitor operating system use by applications and detect potential malware from its behavior. Nothing is 100%.
What’s the role of Intelligence in Networks?
25 years ago we had the debate on smart networks with dumb endpoints versus dumb networks with smart endpoints (“the end-to-end argument” and “the stupid network”). Now the issue is more what kind of intelligence you put in the network. Even “stupid” networks have lots of intelligence for security and routing. You want intelligence in the network that enables “smart” devices to use it well.
What the industry worries about is intelligence in the network interfering with “net neutrality”. Youtube et al weren’t there 2 years ago. No way to have a walled garden approach without stifling innovation, because new services are always created outside the “walled garden”.
Example – how do you validate the IP addresses associated with a device? There are 5 different places in a typical network where addresses can be spoofed. There is high value in doing this because it prevents “bot attacks”, and a lot of other security problems come from bot networks. Having verified identity also prevents spam. (Comment – the actual talk went into detail about how they were going to achieve all of this. I think it required some modifications to underlying protocols to do right, which from my experience means it isn’t likely it will ever happen.)
All this identity management and prevention of identity fraud is, from the user’s perspective, intelligence in the network. (Comment – one could argue that there are 3 essential functions of intelligence in the network, security, routing, and charging, and this fits.)
“Many applications aren’t IMS based and never will be – don’t plan on IMS providing the answer to security.”
Value comes from putting new capabilities together. (Many examples).
“In the past 3 years an entirely new class of network operator has emerged with subscribers equivalent to 1/3 of the global mobile network subscribers” – social networks. New users don’t use email, phone, or traditional IM; they do everything through social networking. (Comment – he talked a lot about Second Life. I have read a good deal about this one, which is basically a world simulation with lots of virtual environments. People actually make money creating things in Second Life and selling them to other users for real money. Of course, like every other new technology, a lot of it is about sex, fraud, or gambling. :-) I have also heard that Second Life is in decline.)
Social networks have much more “stickiness” than traditional networking. They also have different growth characteristics from traditional networks. They support multi-point communication, which he said enabled them to grow according to “Reed’s law”, which says the value of a network grows exponentially with the number of participants, versus “Metcalfe’s law”, which applies to point-to-point networks and says the value grows according to the square of the number of participants. Exponential growth is much faster. (Comment – you can take these laws with a lot of skepticism, but I can see the argument that multipoint communities grow quicker. They also become useless quicker because of the amount of unwanted communication you start getting.) (Comment – when I heard this I suspected that “Reed” was David P. Reed, my office mate at MIT who did a lot of the early work on TCP/IP and went on to join others advocating end-to-end functionality. That turns out to be correct. He has an interesting paper from a few years back presenting some of the math and analysis behind the claim of exponential value growth on his website, easily located through a search.) In June 2006, Myspace nudged out Google as the most popular web page.
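The two growth laws can be made concrete with a short sketch; these are the usual textbook forms of the formulas, and the constant factors are arbitrary:

```python
# Rough sketch of the two "network value" laws mentioned above.

def metcalfe(n):
    # Point-to-point networks: value ~ number of possible pairs, O(n^2)
    return n * (n - 1) // 2

def reed(n):
    # Group-forming networks: value ~ number of possible subgroups, O(2^n)
    # (all subsets minus the singletons and the empty set)
    return 2 ** n - n - 1

for n in (10, 20, 30):
    print(n, metcalfe(n), reed(n))
```

Even at 30 participants the subgroup count dwarfs the pair count, which is the whole argument for why multipoint communities can grow in value so much faster.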
Interconnecting disparate communities has lots of value. Presence is a stimulant for instant messaging and significantly increases IM traffic no matter how it is done. 1/3 of IM sessions transition to voice calls.
IM/SMS networks in the
Oplan is a non-profit focused on open networks, really focused on education for “the common man”. He is a
He went through some history of Colt, a
He started with 7 fundamental principles driving what happens when things collide.
“Survival is not compulsory” (Deming). He closed with a slide he says is 25 years old and comes from an item in the Economist, which talked about how companies still operate as though they inherited permanent control over their markets and customers, even though they are powerless to keep them in the face of superior technologies or competition.
$1.6 trillion today is based on the model of the network as a tollroad with charges. If we go to a new model based on advertising, the total is 1/10th of that. If that’s what happens, is there money to plan the next generation of innovations?
Imode is everyone’s favorite example, but it has largely collapsed and had no impact outside of Japan.
Question to Panel – what should we do?
Baker – last century was all about putting copper in walls. Copper hasn’t served our needs well. Fiber is cheaper, has more bandwidth, and is less subject to tampering and theft. Bury more fiber.
Matson – take the money saved by not spending 1.6 trillion on infrastructure and put it into new things. Problem is that big companies don’t know how to do it. They are excellent at milking continued slow growth. All the CEOs know there is a big crash coming at some point, but the strategy seems to be “cash out before the crash”, rather than figuring out how to embrace change and build a sustainable future.
Faster – all the CEOs they talk to are very worried about the change in the industry. Operators are trying to remake themselves.
Example, FMC – France Telecom and
Question: (Rezza Jafari, chair of the ITU) 60% of the world’s population has no access to communication. We have a developed (western) world bias in our thinking.
Mark – You are right but our industry is learning. A few
Question: Chet McQuaide – Highways show “the tragedy of the commons”. Open access, lots of capacity, but traffic jams make them unusable. Won’t this happen to the internet? (Comment – in past discussions with Chet he has brought up the internet community’s stand on net neutrality as being a problem for BellSouth, now AT&T, several times. The right not to be neutral is clearly a strongly held belief for him and/or the company.)
Mark – If there’s a need, the open market answer is to let people build to it. Chet’s interpretation was that there would be a business opportunity for an alternate “toll” highway, and in a separate conversation he said that’s the telecom view on why net neutrality is bad. Telcos want the right to offer a two tier system, an open highway for one price and a “toll” highway that costs more (maybe paid by the web sites). (Comment – my response was that the trouble is, just like in the early days of regulated and unregulated services, nobody trusts them not to overcharge for their basic service to subsidize the toll service.)
This session was really a continuation of the themes of the keynotes, looking at the impacts of new business models for communication services.
Roberto has been a leader in services in TI Labs for many years (I worked with him in the Bell Labs/IN days). His paper was selected for one of 3 “Best Paper” awards. He describes his role as changed, more looking at trends than nuts and bolts. One warning to us – when we go into the new world, we are the newcomers; Google is the incumbent. They have been working in this for many years and know what is going on; we are the onlookers. Roberto has been involved in looking at service brokering for many years. (Comment – service brokering was a major theme of TINA-C, which Roberto and many other old-timers in network services were involved in.) His contention is that Google is doing everything that the people who envisioned service brokering for telecom described.
If you look for papers on search engines you will find references from 1998, which is about the only published information from Google. Things have changed (but not been published), so he has been exploring the details of what Google really does to better understand how they perform the brokering role.
Google has been doubling revenue every 12 months for several years ($10.6 billion in 2006), though slowing a bit in recent years.
Virtually all of Google’s expenditures go into their data center. In addition to searching, it supports caching, which gives them the ability to provide high performance high reliability access to web sites (Google will serve pages up out of its cache rather than the public internet). (He said many say there are 2 internets, Google’s internet and the real internet, and Google’s is more important to most people)
Google’s page ranking algorithm is based on linkages – the value of a page depends on how many links there are to it and where they come from. The algorithm operates in a Web crawler that is distributed and replicated many times. (Comment – the paper described this in more detail.)
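As a rough illustration of the idea, here is a minimal power-iteration ranking in the spirit of the original 1998 PageRank paper; Google's production algorithm is unpublished and far more elaborate, so treat this only as a sketch of the core recursion (a page is valuable if valuable pages link to it):

```python
# Minimal link-based ranking sketch (power iteration with damping).

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Every page gets a small "teleport" share, plus shares from inlinks.
        new = {p: (1 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            if not outs:                       # dangling page: spread evenly
                for q in pages:
                    new[q] += damping * rank[p] / len(pages)
            else:
                for q in outs:
                    new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

# C is linked from both A and B, so it ends up ranked highest.
r = pagerank({"A": ["C"], "B": ["C"], "C": ["A"]})
print(max(r, key=r.get))  # → C
```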
Estimates of Google’s data center run from 60K to more than 100K servers – “pizza box” PCs with 1-2G of RAM and 80G of disk. They use their own version of Linux, a modified version of Redhat (mainly in the file system, to allow management of bigger files by dividing them into chunks and replicating the chunks). “Bell Labs was the academy of telecom, Google
Google data centers are very powerful and near (at least in network connectivity) to their customers. He gave a couple of flow charts for Google’s advertising engine. Lots isn’t published; it’s on blogs. Nobody outside of Google really knows exactly what is in the algorithms. (That’s their value added and they are protecting it.)
TI Labs has been monitoring Google releases recently. TI Labs spent a lot of effort building their own services software based on SIP. Google is doing it better for less (beta test, no guarantees, but it works). His example was that 7 people built their VoIP implementation, and he asked how many hundreds a traditional telco would employ to do it (lots). (Comment – though what I know about Googletalk doesn’t seem like a gigantic implementation to me, Roberto has studied it harder and is genuinely impressed by the apparent breakthrough in productivity represented by doing it with only 7 people, though again I think sociology and situation have more to do with the result than any specific technology.)
Google forces users to identify themselves to use most services and builds a user profile on you. Google reads your email and your SMS and uses them to tune the ads that it gives you for more value. (He gave an example of sending emails on the death of a friend and getting ads for florists and other bereavement related businesses.) (Comment – he wasn’t happy about this, but I’ve been wishing for years that someone would really understand what kinds of things I might buy and limit the advertising to those things, since now I get tons of ads for things I have never bought and will never buy. To do that they have to learn more about me, and I wonder how many people are willing to give up some privacy for the sake of less junk showing up on their screens.)
Final slide showed IMS architecture versus Google – both will deliver the same services in competition. Google is much simpler – 2-3 relays of HTTP messages for one call setup versus 48 SIP message exchanges in IMS, according to one analysis.
Google is the broker in a lot of contexts – advertisers to users, users to services, access users to advertisers, etc. (He described Google WiFi and the coming Google 700MHz broadband service.)
Google gives away all the services that telcos consider their bread and butter and charge for. Service is free, but comes with ads, Advertisers pay.
Who can stop Google –
Google and operators – Cooperation or war. Not clear how operators are responding to Google. “It’s absurd to try to block access to services, shooting yourself in the foot because your users are the ones who get hurt and they will go elsewhere”.
Question: The Google model of service is free and advertisers pay used to apply to radio and TV broadcasting, but now many things are available only if you pay. Is advertising enough?
Roberto – have to get more aggressive. If you are looking for the World Cup match, Google would come back and say “we know you are looking for a car; if you are willing to watch a 5 minute advertisement for a car you can watch the match”. Maybe two different levels of payment depending on what ads you see, and Google does have the structure to collect payment.
Question: How do you compete with Skype?
Roberto – familiar with Skype, set up a relative with it, and she used it to nail up a connection between two offices they used for informal conferences all day long and loved it. The point is Skype isn’t telephony. In
Bernard has been in the network intelligence business for many years. He is a past chair of ICIN. He now manages Wimax standards for Alcatel-Lucent. He was presenting for another Lucent speaker who couldn’t travel for the meeting.
Convergence has many definitions: Telecom/IT, Fixed/Mobile, Enterprise/Mobile, Media and Telecom, Voice/Data, Communications and consumer electronics. He went through a lot of reasons why Convergence has value to carriers and customers (not much new here).
His message was basically that there is value in convergence and there are lots of issues in making it work (network efficiency, etc.). The talk didn’t present anything surprising or very new, probably because he wasn’t the original author and was no longer working in this area.
Question – surprised you didn’t talk about billing. (Answer – yes billing is important)
He started with a description of DT labs and their open services effort as basically being about how to bring new people into working with DT to add value to the network. He described what is going on in the migration to IP as basically being “delayering”, reducing the layers and opening them up. (Comment – that’s what should be happening, but I’ve seen way too many architectures that make IP MORE complicated with LESS openness)
He talked a lot about how to package network components into enablers that could be opened (presence, connectivity, etc.). Enablers have to be open, allow combinations, be under user control, and support commercialization.
Presence is a great example, but to become useful it has to have more cooperation (not just a silo). Users have to have more control. Need to go to many sources, then NGN as well. Value rises sharply with what can be provided and how good it is.
One way to do this is to create a wholesale model – telcos sell to an aggregator of service enablers (“Enableco”), which federates all their enablers and sells them to companies which build the services. (Comment – in many areas this indeed would be useful to allow a user to have a choice of mobile carrier, fixed operator, and internet provider(s), and get unified services. In the early days of IN/IP convergence, my team prototyped “internet call management”, allowing you to manage calls made to one endpoint from an IP client – for example, monitor calls made to your home phone while at work. I quickly realized that my home phone service was provided by a different company than the one that served my office, and this would never happen unless the two cooperated somehow. DT Labs is proposing a different model that may work, where each operator exports what they want to provide (e.g. the IN trigger points needed to detect and control the call) and a 3rd party then aggregates them and provides them to a service provider.)
Anders Lunquist (BEA Systems) chaired the session and talked about the changing role of service delivery environments in next generation networks.
IMS is derived from IP and is in the right direction, but it is only focused on SIP. Akogrimo adds other kinds of applications. Their particular focus seems to be on supporting the GRID environment, the ability to share services and capabilities across many different endpoints. The network operator/IMS is the broker that helps get people together.
One problem is that GRID environments are based on SOAP, not SIP, so how does IMS manage SOAP sessions? One approach is SOAP over SIP, another (theirs) is to use SIP to set up a SOAP “session”, using special SDP to describe the session. He uses the SIP registrar to register SOAP services basically so that one can use normal IMS location services to locate appropriate services.
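To make the approach a bit more concrete, here is a purely hypothetical sketch of what such an SDP body (carried in a SIP INVITE) might look like when it describes a SOAP "session" instead of an RTP stream. The "TCP/SOAP" protocol token and the "a=wsdl:" attribute are invented for illustration; the project's actual SDP extensions may well differ:

```python
# Hypothetical SDP offer describing a SOAP "session" rather than media.

def soap_session_sdp(host, port, wsdl_url):
    lines = [
        "v=0",
        f"o=- 0 0 IN IP4 {host}",
        "s=SOAP session",
        f"c=IN IP4 {host}",
        "t=0 0",
        f"m=application {port} TCP/SOAP *",  # invented media/protocol tokens
        f"a=wsdl:{wsdl_url}",                # invented attribute: service contract
    ]
    return "\r\n".join(lines) + "\r\n"

offer = soap_session_sdp("192.0.2.10", 8080, "http://example.org/svc?wsdl")
print(offer)
```

The point of the design is that the SIP machinery (registrar, routing, session state) is reused unchanged; only the session description says "this is a SOAP service endpoint" rather than "this is an audio stream".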
(Comment – I had high expectations for this paper, but in fact found the material extremely confusing, probably because I don’t have the detailed background in Web services and GRID environments that the presentation and paper built on.)
Question: The idea here seems to be to add things to IMS networks that can’t be done by SIP. Give an example. The example given was any service based on web services involving multiple parties. The interactions between parties are all done with Web Services, not SIP, so IMS doesn’t help – it’s oriented to setting up sessions and there aren’t sessions here. Akogrimo extends IMS to set up web services “associations” among the parties. (Comment – I think I understand this, but it’s subtle. I have to wonder whether we aren’t using different language to describe the same things.) “We don’t have enough imagination to envision the new services, but building a structure that can put enablers together to allow new services to emerge has to be good”.
There was a lot of discussion on why IMS would or wouldn’t support SOAP, and whether GRID was standard Web Services or not. Discussion was cut off for time.
KT has more than 20 IN services deployed in the network today. Many customers have several services they subscribe to. Their customers want converged services, which requires more development, and they want to preserve those services across technologies.
They are looking to build new services around an IN Services Gateway. Service platforms connect via SOA/XML to INSG, which mediates connections between services to allow one service to use another (e.g. Call notice server accessing SMS server to send a notice). The INSG also manages global profile data so that each service has data in the common profile database.
He described several call scenarios, then presented an example of a limitation: two call control services that want to share the same trigger. (Comment – he needs what we call “TRIM”, Trigger Interaction Management.)
A second limit is trying to access the database which holds service data – usually the key for the data is the calling party number or called party number, but some services (Personal number) apply to neither. No service can access any other service’s data.
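The keying limitation he described can be sketched as follows; the schema, service names, and numbers here are illustrative stand-ins, not KT's actual design:

```python
# Shared service database keyed only by calling/called party number.
# A service whose natural key is something else (a personal number)
# cannot reach its data through the common interface.

service_data = {
    ("called", "+33123456789"): {"forward_to": "+33698765432"},   # call forwarding
    ("calling", "+33698765432"): {"barred_prefixes": ["+870"]},   # call barring
}

def lookup(key_type, number):
    if key_type not in ("calling", "called"):
        raise KeyError("common DB only supports calling/called party keys")
    return service_data.get((key_type, number))

# A personal-number service is keyed by the personal number itself,
# which is neither the calling nor the called party number on the call leg:
try:
    lookup("personal", "+33700000001")
except KeyError as e:
    print(e)
```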
His view of the alternative is an intelligent broker system with an SDP.
He is the chief technologist for Sun’s SDP effort in their media technology area (not sure exactly what that means). His example for “why use SOA” is Facebook: over 5400 services have been built in 3 months based on an API, query, and markup language (900 were built in the last 2 weeks!). The applications distribute “virally” among networks of connected users and drive a great deal of advertising and transaction revenue. Facebook got 4M new users in 3 weeks. This is what the real next generation service competition is; Telecom has to understand how to play in an environment with agile competitors like Facebook.
Many telecom operators have chosen to sit aside from efforts to define SDP for many reasons. But, this is a key issue for them. Content is NOT king, it’s <10% of communication revenue. Communication is a much larger market. Operators won’t win by capturing content while losing communication business. SOA/SDP is a way for operators to look at creating better structures to empower service development.
SOA is about understanding your business practices and making your services follow them. The paper gives an “Eye Chart” of best principles for engineering business practices. The paper focuses on service delivery and service creation.
He argued that Telecom does need some special low latency environments that may not apply SOA as much: SLEE, SCIM, and some other real-time components may need to differ from the platforms used for Web and enterprise services, but networks should be architected to keep these special needs platforms to a minimum so most software is built in more common environments.
One aspect of Service Creation is Orchestration. This is ideally done with scripting in Web Services using WS-BPEL. This is great for quick prototyping. You put service enablers together into process flows, and if you have the right service enablers you can cover a lot of services. Web2.0 “mash ups” are similar, but often based on different composition rules. Most are client driven (e.g.
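The orchestration idea can be reduced to a toy sketch: service enablers as functions, and a "process flow" that wires them together. A real deployment would express the flow in WS-BPEL over SOAP endpoints; the enabler names and data here are made-up stand-ins:

```python
# Three service enablers (stubs) and one composite "process flow".

def locate(user):                 # location enabler
    return {"alice": "Bordeaux"}.get(user, "unknown")

def weather(city):                # third-party content enabler
    return "sunny" if city == "Bordeaux" else "n/a"

def send_sms(user, text):         # messaging enabler
    return f"SMS to {user}: {text}"

def weather_alert_flow(user):
    """Composite service: look up location, fetch weather, notify by SMS."""
    city = locate(user)
    return send_sms(user, f"Weather in {city}: {weather(city)}")

print(weather_alert_flow("alice"))  # → SMS to alice: Weather in Bordeaux: sunny
```

The appeal is that the flow itself is trivial once the enablers exist; the hard part, as the speaker went on to say, is managing the dependencies such composition creates.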
Service Creation is only part of the problem. We have a good handle on that, what we aren’t good at is service management – deployment, provisioning, assurance, usage/charging. We have to create manageable services, not just new services. Composition makes this harder because services then have dependencies. (Comment- I believe that a large part of the problem people have maintaining personal computers is the complex and poorly understood dependencies of applications on shared libraries (DLLs) and the registry. It would be a disaster to recreate that experience in telecom!)
Concluding – the industry needs to radically rethink how we build and deploy services and what platforms we use if we want to compete against the “new” service providers out there like Facebook. Service creation, service orchestration, and execution are beginning to be understood, but we aren’t good at service management.
Question (Chet McQuaide) – Is the operator OSS/BSS structure an advantage or a roadblock? Not sure which. If done right it can be a big advantage.
Question (Warren Montgomery) – SOA and other things we have heard about create new dependencies – services built from components and shared profile data. This is good for productivity and cost, but many of the problems users have with PCs come from poorly understood dependencies on shared libraries, drivers, and databases. In the past Telecom avoided this by sharing little and exhaustively testing. How do we ensure we don’t introduce problems in building next generation SDPs? (No real answer to this other than you can do re-use well and have to provide incentives to people to do it.) (Comment – several people approached me after the session and mentioned that this was an interesting question. After almost a full day of hearing speakers talk about the superior service creation capabilities of the IT world, we need to understand that Telecom has done some things well and that the flexibility and speed of IT don’t come without new risks and costs.)
His study involved Instant group communication (IM) – why IM? It has been a big success in the web, and there are good opportunities for composing services.
He went through some work they did by using web services to build applications mainly on client systems working with IMS based servers. Portability is a hard problem, as is access to all the capabilities of the client. Complex services need more than Web Services alone.
Web Services do apply to the whole architecture – they are useful on the client as well as the servers. They did believe that by exposing the right interfaces they could enhance services through composition.
Question (Michael Trowler, BT) – what’s the right level of APIs? Good question. There is a tradeoff between being complete and being too complex. Parlay is good for building communication applications (but too complicated and low level for those looking just to add communication as part of other services) His view is if you want to focus on a particular area like group communications, a high level focused API is best.
Rick is a database specialist who was formerly a professor at USC. (One of his co-authors is Al Aho, noted and now retired algorithms and data structures specialist and author of several classic texts) (Comment – I remember his name, and believe that he worked in one of the organizations applying formal mathematics to protocols and communication programming)
Most service blending has been about exposing information from a network and using it in a transaction. What they were looking at was “shared experience communication” (sessions) and managing the lifecycle of sessions.
Theirs was a “
Example – family portal – “always on” group conference for a family with access control to join. He said that ParlayX couldn’t support seamless upgrade from 2 party to 3 party conference (Comment – I don’t know, but I don’t think ParlayX really addresses conferencing). There may be controls on whether new users can join, and the ability to bridge in additional devices and media types.
Overall architecture used a WSDL over Soap/HTTP to a session manager that controlled the actual services, and to web servers providing the user interfaces.
Sessions have one or more bubbles; each bubble has parties and media. They have a rich predicate language that is used to specify events and notifications so that information can be filtered coming up. The bubbles use state machines to describe the operation of the session. (For example, each participant will have a machine describing whether the participant is invited, joined, listen only, speak only, etc.)
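A minimal version of such a per-participant state machine might look like this; the states and events are illustrative, and the paper's model is considerably richer (predicates, notifications, media legs per bubble):

```python
# Toy per-participant state machine: (state, event) -> next state.

TRANSITIONS = {
    ("invited", "accept"): "joined",
    ("invited", "decline"): "left",
    ("joined", "mute"): "listen_only",
    ("listen_only", "unmute"): "joined",
    ("joined", "hangup"): "left",
    ("listen_only", "hangup"): "left",
}

class Participant:
    def __init__(self, name):
        self.name = name
        self.state = "invited"

    def handle(self, event):
        key = (self.state, event)
        if key not in TRANSITIONS:
            raise ValueError(f"{event!r} not allowed in state {self.state!r}")
        self.state = TRANSITIONS[key]
        return self.state

p = Participant("alice")
p.handle("accept")
p.handle("mute")
print(p.state)  # → listen_only
```

The claim in the talk is that once session mechanics live in a model like this, the service author never writes the escalation or hangup logic by hand.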
A lot of the detail of making the session work (escalation from 2 to 3 party, hangup, etc.) is taken care of by the underlying model.
Question: How does this compare to programming in IN? Answer – joined too late to have expertise in IN, but he felt their model was very simple and intuitive. (Comment – one of the problems with IN was that there was no standard programming model. Each vendor developed their own. Some gave very simple models for simple services, but only the most advanced of the IN standards addressed multi-party calls, and that wasn’t extensively deployed, so services like “Family Portal” would almost certainly have been done with ad hoc programming on a service node.)
He is an IN specialist, and was involved in a joint venture between Sprint and
What are good criteria for deciding on re-usable functions? They have to be economic in nature:
(Comment – as someone who worked on re-use in telecommunications for some time I think these are quite a reasonable set of criteria. Unfortunately they require both experience (“banality” is often not apparent until many services using the function have been built) and in depth knowledge (Traffic optimization isn’t always obvious), which is probably why picking good abstractions remains a difficult problem.)
Is a Call Log a good candidate?
What are the criteria for success? Cost – not a mission critical service, so not worth spending a lot to make it absolutely available. “Make it multi” – make it apply to as many networks and devices as possible. The whole idea is to let you see the state of your calls from as many places as possible.
He talked about how to integrate the call log with other services – enablers may operate at many levels – protocols, Parlay call control, and services. Call logs really belong as a service level enabler. Note
Their work focused on developing services for small/medium enterprises in order to determine the appropriate framework for introducing services using SOA. The lab used a service creation workbench that composed service building blocks. The intent is to enable end-user service creation from standard services.
The example used was a location-based service where a customer entering a designated area gets some kind of SMS. (Comment – interesting; in the US this would be considered SPAM, in part because the receiver pays for the SMS – like getting junk fax calls, which waste the resources in your machine.) Another example involved streaming video from a device at a soccer match to a central server that would then distribute it to your buddies. (Comment – wouldn’t this be considered piracy if the match in question had any kind of broadcast licensing agreement limiting distribution?)
They used Parlay X 2.1 as an interface to deal with the network which provided much of what they needed (But of course they found it didn’t do everything, and had to extend it!)
The prototype ran on the Open Source IMS Core (described in Monday’s tutorial). It also used an IMS client framework. He feels that the importance of clients in IMS is underestimated – it is hard to have one client capable of supporting all services, so they built a framework and customize their clients. (Comment – I can understand this. Given multimedia and the variety of communication types, clients have gotten too complex to assume you can support everything from one.)
Question (University person) – The paper says you use Parlay X with policies. What are the policies? They used policies from the OMA service framework and its approach to identity management. They also used BPEL-based composition of service building blocks and Parlay-based access control.
Question (Dave Ludlam) – Comment on the fact that Parlay was first introduced in 1998 and we are still discussing call models, vs. what is happening in the internet space.
Parlay is responding to the problems with conferencing, call merger, and multimedia. (Comment – yes, but the response has been SLOW.) Lots of discussion resulted. Rick Hull described various carriers’ approaches: AT&T Wireless has a Parlay deployment mainly for data, mainly used in partnership with companies wanting to interface to send/receive SMS. BT introduced Parlay (but in a later conversation a BT person said that they have withdrawn the interface). In a followup with Dave and others, we outlined some reasons why the internet community has a much easier time:
In other words the problem isn’t the technology, it’s the environment in which it operates. (Comment – If I follow this argument it suggests that the telecom industry is doomed. There is no way that it will ever be able to be as agile as industry not saddled with the same environment of operations and over time all new services will migrate to the unregulated, paid by advertising internet and telecom will be left with providing basic service, which it does quite well.)
Chet’s paper came from the former BellSouth company joint with his team. He presented an IMS view of a common real-time services architecture. It’s applications on top, Web and IMS in the middle, and LOTS of networks and access on the bottom. (Comment – if I put on the “what would Facebook do” filter for this I’d say this picture is WAY too complicated. Why make it work for all access forms and interwork with everyone if my customers are all on IP?)
He showed the 3GPP IMS picture we all know and asked why there is no service broker that allows combination of applications from various sources, including Parlay and CAMEL. SCIM is just a few lines of description in 3GPP, though there is discussion of trying to do this in 3GPP. Chet repeated a comment from his co-author (Nick Hlujack) on the observation that our industry moves at a glacial pace: “Glaciers are the only rivers which can move forward but still retreat (due to global warming).” (Comment – This may be one of the best summaries of the industry I have heard :-) )
Chet made the point that you really need brokering among services even in a limited domain – if 1 leg of Simultaneous ringing services has Voicemail, that voicemail may capture all the calls. (Comment – yes, this is why service brokering is essential and also why it’s not obvious how to do it in a general way)
Chet illustrated the 3GPP filtering rules for SCIM – they allow simple selection rules based on identity, message, and direction, but are basically inadequate to capture the full range of service interactions.
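For concreteness, the 3GPP filtering rules (initial Filter Criteria in the user profile) match roughly on those three dimensions and route matching requests to an application server. A minimal sketch of that matching logic, with identities and AS addresses that are purely illustrative (the real rules are XML in the HSS profile):

```python
# Sketch of 3GPP-style initial Filter Criteria (iFC) matching.
# Each rule names a served identity, a SIP method, and a session case
# (originating vs. terminating), and points at an application server.
FILTER_CRITERIA = [
    {"identity": "sip:alice@example.com", "method": "INVITE",
     "direction": "originating", "as_uri": "sip:prepaid-as.example.com"},
    {"identity": "sip:alice@example.com", "method": "INVITE",
     "direction": "terminating", "as_uri": "sip:voicemail-as.example.com"},
]

def route_request(identity, method, direction):
    """Return the application servers triggered for this request, in order."""
    return [fc["as_uri"] for fc in FILTER_CRITERIA
            if fc["identity"] == identity
            and fc["method"] == method
            and fc["direction"] == direction]

print(route_request("sip:alice@example.com", "INVITE", "terminating"))
# ['sip:voicemail-as.example.com']
```

Chet’s point is visible even here: the rules can only select which servers to invoke; they cannot express how the selected services should interact with each other.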
Two problems for service brokering – the “real time” problem, which is mostly what we address, and the “operations” problem of subscription, billing, packaging, etc. There are vendors who address the latter and it’s important, but it is not addressed here.
What are the requirements? Standard; programmable at design time; low overhead (bypass when possible, though this is not simple because you want to be able to add brokering dynamically without re-entry of data); addresses the whole scope; operates in various service architectures.
Some architecture slides – shows a SIP/IMS structure with SIP applications and a Web application stack, with some common enablers that feed both, but you also need a broker that can work with both and interface with OSS/BSS to enable converged applications. He put up a chart with a data description for all of the data behind a service broker without explaining much (in the paper).
My talk seemed well received. I covered motivation for TRIM, some call flows, evolution to IMS, and some practical implementation issues.
Question (Rick Hull) – can you describe TRIM brokering rules with a common language? Response – mostly I believe you can do it using XML rules, but there will always be exceptions where you need deep protocol access and service knowledge.
Question (J Hartog) – will the brokering be as complex as the service logic? Answer – maybe, but the real value of saving the current implementation is its integration in service management and the preservation of the user experience.
Not all convergence is going at the same rate – The core is going to IMS faster than access. That means you have to bridge architectures during transition. (Comment – we talked a lot during breakfast as well. His focus has been on delivering IMS services into CAMEL networks while many have looked at the converse problem (CAMEL services to IMS))
His network picture showed CAMEL and SIP-ISC architectures – without coordination you have separate services and no guarantee of consistency. You have to duplicate services.
Two architectures for convergence – one based on access network – triggering occurs in the network where service is requested. Service platform has to deal with both CAMEL and SIP as a result. (Comment – this is the situation I see most people addressing)
The second architecture triggers everything from the SIP/IMS core and routes everything up to it. (He talked about this being the model for VCC, the standard for fixed/mobile convergence service which allows a mobile phone to reconnect to a WiFi network and reroute a call in progress while preserving “Voice Call Continuity”)
Some examples: Dual phone use.
Multi-access phone (Wireless LAN, HSPA, and GSM). He talked about the UMA approach. He went through some specific examples of call migration from one network to another with each of the two strategies. With access triggering, the big problem is synchronizing CAMEL triggering with the existing call when the call migrates into CAMEL. It’s a bit easier when the call is anchored in the IMS network and services are triggered in the IMS. (Very complicated examples.)
Access triggering is preferred when existing services are extended or when existing services get data enrichments.
IMS triggering is preferred when new services are designed, when converged services are used, or when multi-access phones are used.
Questions: Is the IMS triggering real, or is it theoretical? Is there any practical experience with triggering CAMEL services into SIP? It sounds nice, but does it work? His answer was that yes, there are examples and a lot of standardized mapping for putting CAMEL information into SIP, including location mapping and all the details. (Comment – but has anyone actually built it?)
Question: Have you looked at the TRIM/SCIM architecture, why not? Each has a use (Comment – I believe it depends on what you want to preserve and extend. TRIM/SCIM is about extending the existing services into the future. IMS is about new stuff only)
She presented a long exploration of the various standards for SDE and SDP, really focusing on the OMA view of service creation. Nothing exceptional in the talk.
Discussion: I had a lot of followup discussions with various people on service brokering and service triggering. One big question is how the TRIM/SCIM/IM-SSF view fits with the two views Ericsson presented on brokering. It seems an intermediate approach (trigger in one network, and potentially serve to the other only when needed). I agree, though I think the real determinant of which approach you want for service brokering depends on what you are trying to preserve in your network migration. Those with a large investment in IN services looking to move forward will prefer the TRIM/SCIM/IM-SSF approach, while those with little invested may want to start clean in IMS.
What they have is a simple open source (Linux) web server running on a mobile device that communicates mobile information back to the network. He had a demo he hoped to be able to show. The mobile web server (Raccoon) makes content on the mobile phone available to a mapping server, which then combines information from Google Maps to deliver pages back to whatever mobile requests it.
The server registers all users (some outside of Nokia). He described the implementation, which basically uses a web services interface into Google Maps to translate the GPS data from the handset into a location. Other functions that would be available if he were connected include the ability to activate the camera on the phone and send SMS messages.
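The GPS-to-location step is essentially a reverse-geocoding web service call. A rough sketch of the mechanics follows; the endpoint URL and the JSON response shape are entirely hypothetical, since the talk did not name the actual Google Maps interface used:

```python
import json
from urllib.parse import urlencode

# Hypothetical reverse-geocode endpoint, for illustration only.
GEOCODE_ENDPOINT = "http://maps.example.com/reverse"

def build_lookup_url(lat, lon):
    """Build a reverse-geocode request for GPS coordinates from the handset."""
    return GEOCODE_ENDPOINT + "?" + urlencode({"lat": lat, "lon": lon})

def parse_location(response_body):
    """Pull a displayable place name out of a (hypothetical) JSON reply."""
    reply = json.loads(response_body)
    return reply.get("address", "unknown location")

url = build_lookup_url(48.2082, 16.3738)
sample = '{"address": "Vienna, Austria"}'
print(parse_location(sample))  # Vienna, Austria
```

The battery-consumption complaint later in the talk follows directly from this design: every location request drives a round trip like the one above from the handset.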
(Comment – After several false starts he got it up and I could access the server from my laptop using Wifi access. I could see locations for 23 phones, only 3 of them on line. Eventually I got to his phone, and could get a picture of the conference room with the phone.)
They have a few hundred users of this and draw some conclusions: Location is valuable, Google maps are very nice. Big problem is battery consumption – the phone burns its battery responding to location requests and running the browser.
The main point here though seemed to be to illustrate a new kind of service that could be created as a “mash up” using the facilities available.
Question: What service ideas based on location do you have? Finding routes, finding things nearby, location-driven advertising. (Comment – again, why isn’t advertising pushed at a phone, triggered by location, considered SPAM?)
Question: Can I sign up for this? Yes, you can download the software from http://raccoon.openlaboratory.net/RaccoonOnMap/RaccoonOnMap.html. (Comment – I’m sure you need the right kind of phone and the ability to download new software, which may not be commonly available, but it’s interesting.)
ICIN has attracted papers from this university for many years. They generally are good technical papers. I believe the interest stems from an IEEE conference on IN held in
Currently there are two approaches to service creation: Full Parlay, and Web Services. With Parlay the main problem is complexity (registration, etc.) He talked about the Parlay X service and what is available and what isn’t to introduce the need for extended call control. They followed an ECMA view of call control.
He gave an example of a virtual PBX delivering to
Question: So what’s different? They used a different approach to determining what’s required and arrived at about the same place. The implementation isn’t tied to the API or a particular network, you could connect to Asterisk, mobile network, fixed number, or anything else.
(Comment – I worked with him in the past and especially his absent co-author, Carlo Licciardi, during a time when Lucent and TI Labs had a joint program in IN. Claudio was heavily involved in JAIN SLEE at one point.)
He described some standards or consortium efforts aimed at mobile web:
He described an effort that they had to put context around content – uploaded mobile pictures, to create a more attractive format for sharing information and drive increased use. Called Teamlife, it allows you to put photos on a map, group them, use the location information from phones. All this was done with Web2.0 techniques. (Comment – I don’t think I saw much in it that wasn’t in Google Earth and their related photo placement sites)
Horst Thieme (Sun Microsystems) chaired this session and began with a recap of how many people have predicted the death of telecommunications, but said that business models like this will allow telecom operators to “fight back”.
LINUS is, I believe, a market research organization in
Fixed and broadband operators look to IMS for a cost-effective means of delivering internet-style applications and a path to convert legacy to IP. FMC and penetration of the mobile market are motivations for some.
Pricing models (BT Fusion) – 5 pence for
He went through a lot of other requirements focused on users and operators. Basically I didn’t see a lot of surprises in any of these. One was interesting. Mobile Operators mostly aren’t mentioning IMS – it’s an implementation detail, not a requirement. They focus on services.
DT has been trying to look more and more outside to see the future. It’s difficult now since there are thousands of companies building applications. He views that for the first time telecom has the opportunity to benefit from
Recent challenges -- The Web is a natural bypass of telco’s R&D – lowers the entry barrier. Radical challenge – new players change the landscape through decoupling.
We are in a period of experimentation – it’s not clear what will happen.
Telco R&D has lost control of the evolution of the network – more players and new entrants have more influence. The right response is to focus on what really needs to be in the operator network. They have to support interoperability with legacy and NGN, and focus.
He looked at three trends in society and implications for needs:
Then they looked at different building blocks for services and how they would support a person responding to each of these trends in order to determine which ones support all approaches – those things are clearly useful in multiple scenarios.
He got a bunch of questions on how their process was working. They feel it is valuable and has helped them separate the applications and technologies that require telco knowledge and are not duplicated outside (the things they should work on) from those which are available and better done by others.
He described their thoughts on how to get there based on a scenario-driven development process. They develop and refine scenarios for how the service should work, then decompose them, analyze each one for each situation, and build to the scenarios.
He had an example starting from a written description of what would happen, then to a more structured text description, then decomposition into situations (places/times where things happen) (Comment – this is not unlike use case driven design of Object Oriented Systems)
The pieces of the scenario were mapped onto different “actors” who participate in the delivery of the service, then given behavior needed to deliver it. They then map money flow and responsibilities. (Comment – Not to criticize this work, but this is getting too hard and too structured. While it is a nice structured development process, it reflects thinking about roles and domains from a “monopoly”, kind of mindset. The Web2.0 companies wouldn’t do it this way!)
Question (Horst Thieme) – So, what is the one service you would pick?
(He didn’t answer that; he gave a time frame for their work that said they would pick the services later and then begin work on a service platform to deliver them.) (Comment – Again, this is a typical “plan then build” kind of process we are all familiar with; I think today’s world would do it differently.)
Question (Horst) – to Heinrich – so, how do you keep the work on services relevant to the development? (Answer – involve the product managers early!)
I arrived at the end of the first presentation because of a late evening the night before.
I arrived too late for all but the conclusions, but the conclusions were interesting. He was basically stating that intelligence will move rapidly to the devices and the edges in next generation networks. During the question period he said the transition would happen much more rapidly in fixed networks (he cited BT’s 21st Century Network as an example). His message seemed to be that there is no future in SS7-based IN.
He presented their architecture called “ScaleNet”. This is an IMS based network which has been extended with AAA Security and Mobility building blocks that get combined to implement the FMC service.
The work is part of a European Commission project called Daidalos, which is aimed at understanding communication beyond 3G. (Comment – I’ve always found it interesting how much communication research is funded by governments and inter-government agencies like this in Europe, vs. in the
One of the concepts he introduced was the notion of operating with a virtual ID to ensure privacy. Your real identity is known only by a “trust center”, usually a trusted operator, which hands out a virtual ID that can be used to obtain services from an “untrusted” service provider. His example was someone known to a mobile operator who wishes to use services from an untrusted access provider.
Federation is then used to extend these IDs across networks. Making all of this work will require minor extensions of the ISS and HSS interfaces (Comment – not sure what extensions were required, but it always bothers me a bit when something which seems relatively minor requires extensions in core protocols)
Question: Can you elaborate on Virtual IDs, how they are federated, and what extensions are really needed? Answer: the Virtual ID is used by the service provider you want service from in order to authorize you and charge you. Federation allows that information to be used in multiple networks and allows the charging/authorization information from your personal profile to populate the virtual ID and charging information to return, without exposing personal information from your personal profile. Followup – isn’t this just what the Liberty Alliance is doing? (I think the questioner was from Sun.) His answer was that the Liberty Alliance was focused on
SPICE is a joint project between Nokia/Siemens and NEC, and the speaker is actually from a German research laboratory of NEC.
What is a converged service? It’s a moving target. Convergence of transport (IP and Circuit) was what that meant before. Now it may mean convergence of private (home) network with public networks, multi-media, etc. SPICE is another big EU supported project with many partners. The aim seems to be to support 3rd party service providers providing services across multiple networks using service enablers in each network. The focus is on increasing service intelligence using context awareness and “knowledge processing” (I believe knowledge processing really represents drawing conclusions about what a user or service may need through context information like where the service is being requested and the profile of the requestor)
The architecture they use has a layering which includes: Capabilities and enablers, Component services “knowledge layer”, and Value added services. Both terminals and networks have these layers. Networks have an exposure and mediation layer on top of all of this that exposes interfaces to 3rd party service providers (on a separate platform).
The work is based on the Open IMS open source implementation, which interacts with some of the participants’ IMS-based networks. Their service execution environment includes a JSLEE-based application environment with interfaces to J2EE supporting non-real-time services.
The terminal service execution environment is implemented on Windows Mobile, and supports a “desktop” environment with widgets used to implement the pieces. The knowledge layer uses a knowledge base in each device/server which connects into a set of distributed ontologies that organize knowledge about various aspects – identity, security, capabilities, etc.
Service creation is available at different levels. For the end user it is quite limited – basically responses to particular triggering events (Not sure if the events can be extended)
He gave an example that would automatically find a restaurant when the user was in an area at dinner time. Interestingly enough, they had a problem because they were using a Google interface for calendaring, and a few days before a critical demo the interface changed. He correctly pointed out that this is a big challenge in building Web2.0 services: unless you have some kind of contract with the service you are using, it can change unpredictably and your service breaks as a result. (Comment – this is essentially the broken windows software problem I suggested in an earlier comment.)
Enablers for converged services (Converged in this case meant working across networks)
Question – are you planning to operate a real network across
Question (Telecom Italia) They are also a participant in Spice, and asked how Spice is doing in influencing standards. Answer – some companies are pushing the results into standards (example, presence for IPTV).
Question – looks like you are doing “major damage” to the structure of IMS, what’s the prospect for success (question was about service mobility to support “full” roaming.) Answer – they are more interested in what the Web community does and he didn’t seem to care whether it fit the existing paradigm for roaming or not. Felt that Telecom will have to fall in line with what the internet does here.
Chaired by Kevin Fogarty, ex BT who publishes a telecom journal now
Michael is the FT representative to TiSpan in this area. His co-author, Bruno Chartres, is one of FT’s key service architects and a long term participant in ICIN
Why use IMS? It’s a common infrastructure for communication and delivery, common authentication, and some ability to use resource management from IMS (though IPTV related resources have to be managed in the IPTV platform, transport can be shared).
The scope of what they want to do includes both Fixed and
The extensions come in the core to implement a “streaming session” representing IPTV delivery, and in the media framework to implement storage for media. Much of the rest is standard IMS (service discovery, authentication, etc.). Setup of an IPTV session uses standard SIP with specialized SDP. Negotiation of SDP handles things like rights checking and choice of media (coding, rate, etc.). The control of the stream goes end-to-end via RTSP (and doesn’t impact the IMS core).
Broadcast presents some new challenges since the broadcast session is set up by the broadcaster, which negotiates the transport resources; subscribers then request the ability to join the session. Again, SIP/SDP is used for setup. Control of the stream uses IGMP (the Internet Group Management Protocol, which hosts use to join and leave IP multicast groups). The main impact on IMS is the need to reserve the transport resources in the access network to handle multicast distribution of the broadcast channels. He showed call flows starting with the user equipment contacting service control.
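From the receiver’s side, joining a broadcast channel via IGMP is a single socket option: the kernel emits the IGMP membership report toward the access network. A minimal sketch, with an illustrative multicast group address and port:

```python
import socket
import struct

GROUP = "239.1.1.1"   # example multicast address for a broadcast TV channel
PORT = 5004           # a common RTP port; both values are illustrative

def make_membership_request(group, interface="0.0.0.0"):
    """Pack the ip_mreq structure used to join a multicast group."""
    return struct.pack("4s4s", socket.inet_aton(group),
                       socket.inet_aton(interface))

def join_channel(group=GROUP, port=PORT):
    """Open a UDP socket and ask the kernel to join the group; the kernel
    then sends the IGMP membership report on our behalf."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM,
                         socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                    make_membership_request(group))
    return sock
```

This also makes the IMS impact he described clearer: the join itself happens entirely in the IP layer, so the IMS role is limited to authorizing the subscriber and reserving access-network resources for the multicast stream.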
He talked briefly about IPTV service combining broadcast and streaming, such as the ability to pause or go back in a broadcast stream using network resources to buffer the video. He also talked about adding normal IMS services on top of IPTV and the ability to use TV as another media type in a multimedia session.
The merger of IPTV and IMS can add value to both and has the potential to help drive adoption of IMS.
Question (Ove Faergemand) What happens when you push a button on your remote control? Does that get mapped into general messaging for IMS? (No answer, but I believe what happens is that these commands get sent end to end)
Question (Alcatel-Lucent) – How does this relate to fixed/mobile convergence? The questioner gave an example of streaming an encrypted session to mobile devices and said it needs no interference from IMS. Answer: Having a session associated with broadcast gives you a way to manage transport resources. Some operators (FT as an example) have insisted that if a user doesn’t subscribe to a stream, that stream cannot be delivered to him, even if it is encrypted and he can’t read it without the key.
Question (Chet McQuaide) Which services combining IPTV and other applications have been implemented (things like call forwarding, click to dial, session transfer between terminals). They have implemented the ability to move sessions from fixed to mobile and between rooms (Comment – yes, I think this is a case where you will really need “VCC”, since you can’t just connect again and find the right place to begin, and doing so would raise questions about whether it’s the same session for billing purposes) He also talked about some form of parental access control.
Service blending means more than just bundling – SIP influencing IPTV and vice versa. The example they studied was TV Alert to incoming call and interaction with it via your TV remote to control the call. (Comment – TV alert is an old service, and the cable providers are either doing it or have it planned. Control via a TV remote significantly complicates this over just providing an alert that you then can pick up with a normal phone. I have to wonder whether that’s really what people want to do, since you can’t talk into your remote control, or whether what they really want is to be able to pick the call up via their mobile?)
He drew an architecture and key to it was the set top box and the ability to interact with the set-top, remote, Video server, and web content. They found several manufacturers which produce set-tops capable of running web browsers (Comment – see the poster session on Home controllers).
They had lots of problems related to screen layouts, limits on how the window layout was done and how things like location and color of text were controlled.
Infrastructure concerns. Each set top box needs an open connection to a coordinating server. This creates an implementation challenge because of limits on sockets in the coordinating server (Comment – yes, this is a real problem for a lot of services where there are LOTS of clients) Security is also an issue since the coordinating server may be way up in the IMS application server and the messaging has to penetrate firewalls. (Comment – I don’t quite get this, since IMS is all about setting up secure sessions, and all this really needs is a secure session from an App server down to the set-top)
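The many-open-connections problem he raises is usually solved by multiplexing all the set-top connections onto one event loop rather than dedicating a thread or process per box. A small sketch (my own illustration, not their implementation) using asyncio, with a handful of simulated set-top boxes:

```python
import asyncio

async def handle_set_top(reader, writer):
    """Serve one set-top box for the life of its persistent connection."""
    while True:
        line = await reader.readline()
        if not line:                      # box disconnected
            break
        # Echo a notification back; a real server would push call alerts.
        writer.write(b"ALERT " + line)
        await writer.drain()
    writer.close()

async def run_demo():
    # One coordinating server; the event loop multiplexes every socket.
    server = await asyncio.start_server(handle_set_top, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]

    async def one_box(n):
        reader, writer = await asyncio.open_connection("127.0.0.1", port)
        writer.write(b"box%d\n" % n)
        await writer.drain()
        reply = await reader.readline()
        writer.close()
        return reply

    # Simulate several set-top boxes holding connections concurrently.
    replies = await asyncio.gather(*(one_box(i) for i in range(5)))
    server.close()
    return replies

replies = asyncio.run(run_demo())
print(len(replies))  # 5
```

An event loop pushes the per-server limit from thread counts up to file-descriptor limits, which helps, but his firewall-traversal concern remains: the connection still has to be initiated outward from the set-top so it can cross NATs and firewalls to reach the coordinating server.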
Conclusions: Lack of standards presents problem. They had to figure everything out for themselves. Without standards, every vendor is likely to do this differently and interworking will be difficult. You need standards that go down to low levels, like handling the open connection for the control and handling of layout in the set-top box. (Comment – he’s right about the set top. Having looked a couple of times at “TV Caller ID”, before, the variety of set-top interfaces and what they did was a concern.)
Question (Warren Montgomery) – Is there really value in being able to manipulate the call from the set-top given that you can’t take an incoming call over the set-top? (He didn’t answer it; instead he explained that if you take the call, your phone works.)
Question (Chet McQuaide) – did you look at internet call waiting as a model? (Answer – yes, but they found all the solutions to internet call waiting were proprietary and didn’t want to follow them.) (Comment – I don’t recall how standard some of the ICW solutions were. Most I am aware of use SIP notification to reach the internet client and/or actually open a SIP session to the client. There was a lot of attention given to taking some kind of standards-oriented approach, i.e. using standard open protocols rather than proprietary interfaces. Internet call waiting has a big advantage in that the client is a smart device with input/output, and in many cases even a soft client that can take an incoming phone or multimedia call.)
Goals of NextGen TV include delivering interactive TV to all kinds of devices, fixed and mobile. Services include broadcast, streaming, “private” TV channels, advertising supported services (with personalized advertising), and web services.
He went through many services showing mobility of sessions between devices, among other things. He talked about their platform (Comarch CSP) as underlying several services and then interacting with Telco and video distribution networks to deliver the services. (Comment – up to this point at least he didn’t mention IMS. Not sure why not)
Another service was video consulting – basically 3 way call with a video session allowing a “consultant” to see what you want to show him on your video phone (His example was medical, interesting).
“Content Annotation” – This basically was a way of adding advertising or other annotations when a movie or show is paused, and the products are relevant to the movie (the example showed a James bond movie with opportunities to buy a watch, a phone, or a gun which appeared in recent scenes).
Cross media adaptation – here he showed the ability to web browse during video, with the example being finding movie locations using Google Maps. (Comment – I’ve actually wanted to do this when I thought I recognized a location, of course I do this now by pulling up Google earth or some similar application on my laptop, which usually is within reach when I’m watching TV.)
Technologies for doing this: head-end solutions for TV over IP, CAS/DRM systems for content distribution, PVR (DVR) systems for recording, media servers, and ad servers. They are working on middleware to link all of this.
Technology on the client side available in the set top includes: HTML,
He talked a lot about targeting advertising based on individual characteristics. One interesting notion is that you will have “virtual network operators” for TV – think of it like unbundling cable or satellite so that you can have a custom package of channels, with fees based on what advertising you are willing to view. One point he made is that unlike the internet, where users can control where they want to go, with TV the provider can completely control what you see: “We control the horizontal, we control the vertical” :-) (Comment – all this kind of stuff sounds a bit too “1984” to me, though as I think I state elsewhere in these notes, tailoring advertising to your real interests has the potential to make TV and the internet much more enjoyable – no need to mute dozens of commercials for medications for conditions you don’t have and frankly don’t want to see or hear about.)
Lots of other services in this presentation. TV banking, on-line betting on TV matches, etc. (Comment – This presentation has a wealth of service ideas, some interesting, some wacky. I don’t know that it has much to say on implementation, but it’s worth looking at just for ideas.)
He is going to launch a new forum www.ngtvforum.org, to look at standards and directions for next generation TV.
Question – have you seen initiatives in
Question (Chet McQuaide) – In his childhood most movie theaters were vertically integrated; the theater belonged to
Answer – Estaban – one roadblock is that the end-user device can be a closed platform (i.e. owned and controlled by the cable/IPTV provider) and not allow access. Luzar – IPTV is the equivalent of a closed internet – you (the provider) decide how it will operate. (Comment – I’m suspicious whenever I hear things like this. Nature abhors closed systems. Look at AOL, “walled gardens” in mobile, Net neutrality, etc.)
This session was chaired by Kevin Wollard of BT.
He is the CTO of the company. He talked about refactoring taking place over time determining the course of the industry and technology. He proposed four patterns of market development: gradual (IN, closed proprietary systems), continuous development (migration to IMS; multiple phases of increasing value), discontinuous development (a big jump; no examples), and hypercompetitive (continuous plus discontinuous, which is his view of Web2.0).
He described some examples with the use of SIP. For a fixed operator evolving off IN, he showed a black and white list screening service. They introduced a Parlay gateway and then an EJB based application server supporting access to SIP and Parlay as the programmer vehicle. An important distinction he made was that the service provider is the one supplying the application, which may not be the telco. Another example was content push (mobile). This was built on an IMS core with a Telco application server of some sort.
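The black and white list screening logic he described is simple enough to sketch. A minimal, hypothetical version follows; the function name, list representation, and precedence rule (whitelist wins over blacklist) are my assumptions for illustration, not details from the talk:

```python
def screen_call(caller, whitelist, blacklist):
    """Decide whether to admit a call based on subscriber screening lists.

    Hypothetical policy: whitelist entries take precedence over blacklist
    entries, and callers on neither list are admitted by default.
    """
    if caller in whitelist:
        return "allow"
    if caller in blacklist:
        return "reject"
    return "allow"
```

In an application-server deployment like the one he showed, a check of this shape would run on the incoming call event delivered over SIP or Parlay, before the call is routed onward.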
Another example was televoting, using an INAP to SOAP gateway or SIP to INAP gateway and an enterprise server to deliver the services. Question – why use INAP at all? Answer – sometimes an inferior technology can be selected because of a competitive advantage. (What I think he was saying from curves in his material is that for an operator today with INAP, it’s an easier and less expensive/risky path to stick a gateway in and reuse it rather than convert the network to SIP, even if it is clear in the long run that SIP will be a better solution.)
He showed a bunch of supply and demand curves for different kinds of technologies and drew some conclusions about which were at appropriate price points and which weren’t. I had a hard time following all of this. His conclusion seemed to be that SIP was going to be the winner (versus what I am not sure), because of better accumulation of knowledge and improvement over time.
Reputation and history provides a lot of input into which technology succeeds and will be there at equilibrium. If SIP delivers and establishes a good reputation, it will be a winner.
Question (Kevin Wollard) – what performance issues did you have from the SIP based televoting application? Answer – they supported up to 160 calls/second without performance problems.
Comment from the floor (Sun I think) on televoting – didn’t understand how SIP fit in a Parlay or Parlay X solution. Answer – “We don’t believe in Parlay X”; they used Parlay, and claimed to have 85% of the world’s experience in Parlay (Comment – that’s a big claim for a company I hadn’t heard much about before). He again restated that he thinks SIP is the winning strategy (which doesn’t answer the question).
Fokus used to act as a research arm for O2 Germany (until Telefonica bought O2). Telefonica now has both fixed and mobile operations.
The lab uses mostly stuff from their OpenIMS components, including a converged SIP/HTTP service execution environment. They have a directory manager from HP, and media servers and gateways from Cantata (recently bought by Dialogic).
Their IMS client provides a nice screen display and a bunch of different services. The client is very important in their view – it’s what determines the user experience. Very little being done on standardizing there but it’s a critical part of the solution.
They didn’t want to just re-implement voice mail using IMS. Their solution combines Web and audio interfaces. They used traditional DTMF for the voice interface, but provided a comprehensive IP portal for voice mail. They can select multiple channels for notification (SMS, email, SIP messaging, etc.)
It’s an IMS based solution, uses the MRF for message storage and announcements. The app server runs both SIP and HTTP pieces of the solution. He gave some signaling flows (nothing surprising, they can access it either from circuit or SIP, but the circuit side gets gatewayed up to SIP and handled in SIP, no IM-SSF like function to deal with the call in IN). He talked a bit about how the flow works and a particular problem in handling the 486 response in SIP – using that flow he described it as “implementing an IN control architecture using IP piece parts”.
The media interaction was very simple using a back to back User Agent to control the call. His claim was this was something that the IETF got right (Comment – it took a while, but this is a very simple model of call control. Unfortunately it doesn’t give you much guidance on how to use it.)
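As a rough illustration of the kind of decision such a B2BUA makes in the voice-mail flow above, here is a hedged sketch of routing on the 486 (Busy Here) response; the voicemail URI, function name, and return conventions are hypothetical, not taken from the presentation:

```python
VOICEMAIL_URI = "sip:deposit@vm.example.com"  # hypothetical media-server URI

def route_response(status_code, original_callee):
    """Model of a B2BUA decision on the terminating leg.

    Because the B2BUA terminates the caller's dialog and originates an
    independent dialog toward the callee, a 486 (Busy Here) on the callee
    leg can trigger a new outgoing leg toward voicemail instead of simply
    relaying the failure back to the caller.
    """
    if status_code == 486:
        return ("reinvite", VOICEMAIL_URI)       # divert busy calls to deposit
    if 200 <= status_code < 300:
        return ("connect", original_callee)      # success: bridge the two legs
    return ("relay_failure", original_callee)    # other failures pass through
```

This is the sense in which the model is simple but gives little guidance: the mechanism (two independent legs) is easy, and all the service logic lives in decisions like this one.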
The application server had both Servlet containers and SIP Servlet container (connected using JSIP to the network).
His second example was a network address book. It used “SyncML” to allow address books in devices to be synchronized with the network. Their server used the XDMS server to store the address book, and they had to create an adaptor to convert the format to the binary formats needed to sync with the devices.
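The adaptor’s job – translating between the XDMS view of a contact and a compact device-side representation for SyncML – might look roughly like the round trip below. The length-prefixed binary layout here is purely illustrative; the actual device formats are proprietary and were not described:

```python
import struct

def pack_contact(name, number):
    """Pack one address-book entry into a length-prefixed binary record.

    Hypothetical wire format: 2-byte big-endian length followed by the
    field bytes, once for the name and once for the number.
    """
    n = name.encode("utf-8")
    t = number.encode("ascii")
    return struct.pack("!H", len(n)) + n + struct.pack("!H", len(t)) + t

def unpack_contact(blob):
    """Reverse of pack_contact: recover (name, number) from a record."""
    nlen = struct.unpack_from("!H", blob, 0)[0]
    name = blob[2:2 + nlen].decode("utf-8")
    off = 2 + nlen
    tlen = struct.unpack_from("!H", blob, off)[0]
    number = blob[off + 2:off + 2 + tlen].decode("ascii")
    return name, number
```

The real adaptor sits between formats like this and the XML documents stored in the XDMS, converting in both directions as devices synchronize.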
They put some services on top of it including the ability to click to conference. This uses SIP to initiate the conference sessions. O2 was surprised because their network has no conference facility (Comment -- I suspect what has to happen here is that the conferencing is done in a server in the Fokus IMS lab)
He gave another plug for Open Source IMS as a prototyping vehicle. It is available as a download and can run on anything, including a laptop. It’s not for commercial use, but the aim is to supply a complete IMS architecture that can be used to test new services and nodes without having to acquire all the pieces.
They have just established an “Open SOA Telco” playground, which focuses more on the services layer.
Question (Dave Ludlam) – justify your views on Parlay X. (Niklas) – they are a lab, they have used it and it worked. Walter Zelinski – Having been there they found a lot of problems in Parlay. He feels that Parlay X is basically similar, maybe worse because of performance issues. In his view Parlay X will have a limited niche in non-real time services.
The IMS community has made some big promises:
Is IMS the inevitable answer?
All of this is irrelevant to the developer. Service providers used to have two customers, end users and operators; now they have a third: developers! What developers want is familiar tools (e.g. the Eclipse workbench) and concepts. They expect to have packages with published APIs that are accessible through tools and easy to understand. They want to see what it looks like to the end user – the end-user experience is a key part of the service. They want useful, simple abstractions – not “plumbing”.
He proposed that what we should be showing developers about IMS is basically client and service network APIs and a big magic box (IMS) in between. He made a side comment that the server side will have northbound web services interfaces to expose interfaces and that Parlay X is the only real candidate. Parlay X 2.0 is too limited, but 3.0 is viable.
On the device side, JSR 281 and other standards apply; on the server side, JSR 116/289 (SIP Servlet). He claims that JSR 289 doesn’t go far enough because it doesn’t really deal with IMS, just SIP. He showed a complete stack of JSRs in the device which include SIP, utilities, and things like access to calendars, presence, and other things. He showed a simple call flow which doesn’t require much understanding of the signaling by the programmer, but said that we probably need to create even higher level abstractions.
He showed the impact of putting multiple network boundaries between the endpoints in a session. Basically each one has to be standardized enough to allow consistent implementation, but they must be open enough so that each new service doesn’t have to extend the interface.
“M is for Multiservice” – We need to make the M in IMS be multiservice, not just multimedia. Media doesn’t define the service; a device needs to be able to figure out what service is represented by an incoming INVITE. Basically he was arguing for SIP messages carrying some kind of application indicator that could be used to allow a device to have one SIP stack yet route messages to the right application. (Comment – this is exactly the concept that caused me to add a “software address” to the DSL signaling we used in an early ISDN prototype almost 30 years ago. The result eventually translated into multiple layer-2 sessions over the D channel in ISDN)
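A sketch of what such application-indicator routing could look like on the device. The X-Application header name is my invention for illustration – no such header is standardized – but the shape of the dispatch is what he was arguing for:

```python
def dispatch(invite_headers, handlers, default=None):
    """Route an incoming INVITE to the right application.

    Uses a hypothetical application-indicator header so that a single
    SIP stack can front several applications on one device; unmarked
    INVITEs fall back to a default handler (e.g. plain telephony).
    """
    app = invite_headers.get("X-Application")  # hypothetical header name
    return handlers.get(app, default)
```

Without some indicator like this, every application would need its own heuristics (or its own stack) to decide whether an incoming session is meant for it.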
He talked about exposing interfaces to enterprises – he doesn’t want to expose the ISC interface, but feels that everything interesting can be done with the client interface, and it’s simpler.
How to be developer friendly – you need to have a testbed that can simulate everything on a desktop. “SDS” is their simulation environment (60 days for free, then license). It has the whole IMS, but most important it also has a simulated Ericsson Symbian phone. (Comment – As someone who has hacked a lot of code in my past, I believe this is very important. It’s very difficult to develop without a good simulator that can show you your code running in the environment that the user will see. There are always surprises in the specs.)
A good service becomes a platform for new services! We typically draw the world as IMS core + everything else. Instead we should view what’s happening as constantly building up higher level services that become the abstractions at the next level.
Question (Remark actually) I assume Skype is used by 90% of the users, and it’s not IMS. Comment? Answer – Skype was very successful on the fixed side early on. They have done calculations on Skype over 3G and it’s very expensive because of charges for packet data compared to normal voice. “Three” has implemented a Skype lookalike for mobile that looks like Skype to the user and works with Skype but uses normal voice transport. Actually it’s a sneaky way to avoid inter-carrier delivery charges since the call goes to Skype and Skype takes care of that. He said eBay hasn’t been all that hot on Skype. Feels Skype will not be a factor in mobile, but some factor in fixed.
Question (Chet McQuaide) – if users are free to buy their own terminals, as they are in much of the GSM world, how do you get consistency? Answer – they envision deploying their middleware on any device the user wants during the pre-standards period. JSR 281 isn’t there yet, but when it is, he expects it to solve the interworking problem.
The conference typically concludes with the chairman of the program committee giving a wrap up session which has collected inputs on what was learned from all the sessions. This year they also let the chair of the keynote session summarize it separately.
The 3 keynotes presented contrasting viewpoints and came across as rather gloomy to many, but you don’t have to look at it that way. One good sign is that we have attracted and interacted with the IETF community, and that the view from those folks isn’t as “hard” on telcos as in the past. He suggested that the view on network neutrality wasn’t as hard. (Comment – I didn’t really get that impression. The internet folks really don’t want the access providers interfering with open IP transport.) The future really is in the ability to interoperate with the IP world and the telecom world – it isn’t that telecom has no future, it’s that the future is quite different from what we have now; life would be boring if it weren’t. (Comment – different is good, but not if it’s vastly reduced)
The conference at a glance
He summarized the key messages of the conference in several categories. On business models:
On new services and applications:
Service enablers and Infrastructure
The next conference is planned for fall of 2008 in
This was the second year for a Poster Session at ICIN. There were 9 presentations, out of 10 accepted, and I believe the material was of very high quality. I didn’t look at all in detail, so here are some impressions of the ones I found most interesting.
This one looked at how IM has evolved from a niche service to a major service category, and how it has become a component of many other services. It showed use of presence to trigger other services and migration of sessions between IM and other media.
This was an interesting idea of using timestamping, priorities, and delivery deadlines to improve the performance of IMS messaging. The problem it addresses is that the IMS architecture requires many internal messages to establish a session (someone said about 45), and under heavy load it has the potential to break down when messages get lost or arrive too late. The scheme here basically identified the messages most likely to still be useful and gave them priority over those which were likely too late and those which were unlikely to result in completed sessions. A 20% improvement in “on time” message delivery was claimed. (Comment – this is an example of a classic signaling problem in old telecom networks re-appearing in IP networking and the solution is not unlike what SS7 networks have to do under congestion)
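A minimal sketch of the prioritization idea described above, assuming each signaling message carries a priority and a delivery deadline. The data structure and its API are mine, not the poster’s, and the drop-on-expiry policy is the part of their scheme it illustrates:

```python
import heapq

class DeadlineQueue:
    """Priority queue for signaling messages with delivery deadlines.

    Messages are served in priority order (lower value = more urgent),
    and any message whose deadline has already passed is discarded
    rather than delivered, so capacity is not wasted on sessions that
    can no longer complete.
    """

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker preserving FIFO order within a priority

    def push(self, message, priority, deadline):
        heapq.heappush(self._heap, (priority, deadline, self._seq, message))
        self._seq += 1

    def pop(self, now):
        while self._heap:
            priority, deadline, _, message = heapq.heappop(self._heap)
            if deadline >= now:
                return message
            # expired: drop instead of delivering a message that is too late
        return None
```

Under overload this behaves like SS7-style congestion handling: work that can still lead to a completed session is served first, and work that cannot is shed early.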
This poster presented work in TISPAN towards standardizing an architecture for a home controller that would be able to control devices in your home, connecting to a broadband network via IMS. The intent is to cover all access types (fiber, cable, DSL, wireless). The architecture proposed is basically a “mini IMS”, with the ability to have local control of sessions entirely inside the home, and to work with the communication network IMS to control sessions going in and out. I asked a bunch of questions about things like whether the home controller has storage for announcements or user media, and he said work on those kinds of issues is still quite early. (Comment – I have had discussions with people in the cable business in the
This paper discussed whether Linux was a suitable base for a mobile phone. The message is that the “real time” performance of Linux has reached the point where this is feasible, and the memory and processing requirements of Linux are feasible for a mobile device. (Comment – if you can put Windows on one in any form, Linux certainly has to be feasible)
This wasn’t a poster but an actual demonstration by