Notes from ICIN 2007, October 8-11, Bordeaux France

By Warren Montgomery (wamontgomery@ieee.org)

 

This is the 11th ICIN conference, a conference which had its origins in the “Intelligent Network” era and is held every 1-2 years.  ICIN has been re-orienting itself towards the convergence of IP, Telecom, and IT and is now more about Web2.0 and IMS than traditional networks, but it remains the place where people doing research and applied research on intelligence in networks come for 3 days of intensive technical sessions and discussions.

 

Some of the more interesting themes coming out of this year’s conference include:

 

  • Understanding where the competition really is.  I personally was surprised to see “Facebook” proposed as a next generation service provider, but in the internet model they certainly are.  People build their service experience around what they do in Facebook and the carrier providing IP access is only a dumb pipe.
  • IMS continues, but probably slower than expected.  Unlike some previous next generation service architecture visions (e.g. TINA-C) it does have real deployments, but it is a complex structure and will move slowly.
  • “Web 2.0”, a loosely defined collection of service creation technologies, is allowing the internet providers to build much more rapidly than the incumbent carriers.  There were many staggering examples of the number of services built and the speed with which they were built around Web2.0 provider APIs like Google, Facebook, and others.  My take on this is that while the incumbent telecommunication providers can adopt the Web2.0 technology and improve their agility somewhat, much of the reason for the speed of the internet community is the difference in environment and expectations:
    • Need for industry consensus and regulator decisions in traditional telecommunication versus “just do it” for the internet
    • Expectation of high quality service versus “best effort”
    • Implementation on end user devices and servers dedicated to the service versus shared resources (the point being that if your internet-based service fails, it affects just its customers, who can go elsewhere; if a telecom service fails it can take down the network and disrupt everyone, so more care is needed).
    • Advertising-based business model versus consumers paying (payment is both more complicated to implement and leads to higher expectations, which take more effort to meet).
    • Planning on everything succeeding versus 5,000 startups of which 1-10% will succeed.  Startups can be radical, single focus, and try bold things, while many service development plans of incumbent companies plan on a much higher success rate and as a result must take a cautious (and slower) path.
  • Many views of IPTV and Video services.  A key issue is the relationship among content providers, service providers, access providers and customers.  There is no agreement on what the model will be.  The technology enables much richer models than cable or satellite, and in some areas people are exploiting it with direct studio-to-customer distribution, or packaging of custom advertising to support service.

 

What follows is a much more detailed description of the conference sessions and the most interesting technical discussions I had at the conference.

 

Attendance was down this year, 150 vs. about 200-250 average for the past several years (the peak was more like 500 in the late 1990s).  As a member of the technical program committee I was part of the discussion on the reasons why.  One is obvious – Alcatel, Lucent, Siemens, and Nokia each sent a delegation in the past, but mergers have now reduced the number of participants.  The problem is more general though, because in the past many attended from the big European PTTs, but few do now.  (Comment – Alcatel-Lucent is still the largest single participant, which doesn’t bode well for future attendance)  This is a shame really, because most attendees, especially new ones, indicate this is the strongest conference around in the area of service architecture.  (Comment – many of the people who have attended a lot of ICIN conferences said this year’s program was the strongest ever)

 

One of the constants of this conference is that it is always held in Bordeaux.  This is not an easy location to reach, but it is an interesting one and one with its own attractions.  When the conference began in 1989, Bordeaux was in an economically depressed and technologically backward area in France, and holding the conference was part of a greater effort to boost development in the area.  That effort has succeeded.  Bordeaux has modern and efficient public transit, and the old stone buildings have now been cleaned and restored.  Shops and restaurants are upgraded and if anything it’s a better place for a conference now than before.  The isolation actually helps a bit because attendees spend the entire time together and lots of interesting discussion takes place outside of the conference sessions.

 

Tutorial – Web2.0

I was not registered for tutorials but had the opportunity to sit in on the end of this tutorial focused on new models of service creation basically being described as “Web2.0”.  The tutorial was presented by Fokus, a German research lab.  Some of the tidbits I gained from this include:

  • “Web Services” is showing its age as an interface technology.  AJAX, ATOM, RSS, and other lightweight technologies are becoming very important in the creation of service “mashups”.
  • To bring the internet service creation community into IMS, telecom operators basically need to present interfaces that are comfortable to that community.  “Parlay X” is too complicated; we need to be looking for simpler, lighter weight APIs.
  • “Don’t use a massive, complex, expensive technology to do something simple” – this was the view of someone commenting on the use of IMS for presence.  IMS is far more complex and expensive than the simple client/server mechanisms used in IP.
  • What’s the business model for Web2.0 services?  The best answer was advertising.  One example was using location information from IMS to enable a Google search for local restaurants and provide a way to place a call to order a pizza (see the sketch following this list).  Money comes back to Google and the service developer from the restaurants who receive the resulting business.
  • One interesting service model came from a company called “Jah Jah”, which offers “click to dial” connections using optimized routing over VoIP between endpoints specified by the user.  The company has APIs to allow the service to be incorporated into other web applications.  (One example from their website is putting a “call me” button in your email signature, which allows the recipient to easily call you but does not expose your phone number – an interesting service.)
  • “IMS is all about getting a common user profile and it’s happening in the HSS”.  The speaker went through a very familiar dialog on the danger of operators installing multiple independent application smokestacks, and why HSS will solve it.  (Comment – familiar, because I’ve been hearing it since the early days of IN, yet it never seems to happen.  Every technology for getting more commonality and re-use for data and/or capabilities has limits or excessive costs that have driven operators to bypass it much more quickly than anyone expected.)
  • “IMS will succeed because it frees operators from depending on vendors for applications”.  The model here was that using IMS you could put applications on generic servers using open APIs and not get locked in.  (Comment – heard this one before too.  I’m skeptical that it really works.  Nobody implements standard APIs generically, everyone has extensions, and services have a nasty habit of depending on the non-API characteristics of the server, like performance, delay, reliability, and interfaces supported, which are just as effective at locking in customers)
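
A minimal sketch of the pizza-ordering mashup mentioned above, assuming hypothetical enabler objects – the function and method names (get_user_location, search_nearby, click_to_call) are mine for illustration, not real Google or Parlay X APIs:

    # Illustrative only: compose a location lookup, a local search, and
    # click-to-call into one advertising-funded "mashup".
    def order_pizza_mashup(user_id, network, search, telephony):
        location = network.get_user_location(user_id)              # e.g. an IMS/Parlay X location enabler
        shops = search.search_nearby("pizza", location, limit=5)   # e.g. a local search API
        if not shops:
            return None
        # The search provider and developer are paid by the restaurant
        # that receives the resulting call.
        return telephony.click_to_call(caller=user_id, callee=shops[0].phone_number)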

 

An interesting footnote at the end:  Fokus provides many IMS components in an open-source format to enable more interoperability testing and foster development in the network.  They run an annual IMS workshop, next one in November in Munich, which last year drew over 250 people from about 30 countries.  Info at www.openims.com

 

 

 

Keynotes

Fred Baker (Cisco)  Intelligence and Security in the Next Generation Internet

He was an invited speaker asked to present on security.  Security is a very old topic in networks and has had very different definitions, with older concerns being mainly about protecting the secrecy of communications and the integrity of networks.  In the modern internet, issues of phishing, SPAM, spyware, etc. come up.  Those mainly involve properly formed software and communications that have unwanted results.

 

Will the internet ever be secure?  No – basically it’s about preventing crime.  Crime is human nature.  You have to change human nature to prevent crime, and internet security can’t do that.  One problem with achieving security is that it’s often someone else’s problem  -- different service providers operate at different layers.  Your access provider doesn’t know your IM address or your email address and can’t defend against threats related to them.  There are lots of threats out there and it’s hard to figure out how to block them.  He gave an example from his own experience of having “ActiveX” disabled because he feels it is too good a way to introduce “malware” onto his machine, but being told by some websites that he can’t use the site unless he enables it.  That means he has to decide on a case by case basis whether the site is legitimate and worth any potential risk.  This is a hard thing for most users to do.  (Comment – I can relate to this as someone who disables a lot of web technology for the same reason and faces the same problem.  The basic problem is that those who designed the technologies we all rely on to display and interact with internet content did not take security into account from the start and did not limit what could be done by these technologies sufficiently to make it possible to use them safely.)

 

Most threats are at the application layer, not the network layer.  Addressing threats is largely about identity – if you know for sure who email is from you eliminate many threats.  Stronger authentication (two factor authentication) is being pushed by Microsoft for this.  (Comment – much of the problem is that people don’t want to deal with awkward authentication technologies.  It has to be worth a hassle to do it, and generally people feel it isn’t.  I personally believe that we would be more secure if more web sites would simply do things anonymously without requiring authentication, as when users are confronted with the need for multiple passwords just to display their hotel points or read weblogs, they tend to fall back on the same ones they use for things that really do need security, like on-line banking, or fall back on a general scheme for passwords making it easier for someone to get a hold of the important ones.) Cisco has software to monitor operating system use by applications and detect potential malware from its behavior.  Nothing is 100%.

 

What’s the role of Intelligence in Networks?

 

25 years ago we had the debate on smart networks with dumb endpoints versus smart endpoints with dumb networks (“the end to end argument” and “the stupid network”).  Now the issue is more what kind of intelligence you put in the network.  Even “stupid” networks have lots of intelligence for security and routing.  You want intelligence in the network that enables “smart” devices to use it well.

 

What the industry worries about is intelligence in the network interfering with “net neutrality”.  Youtube et al weren’t there 2 years ago.  No way to have a walled garden approach without stifling innovation, because new services are always created outside the “walled garden”.

 

Example – how do you validate the IP addresses associated with a device?  There are 5 different places in a typical network where addresses can be spoofed.  There is high value in doing this because it prevents “bot attacks”, and a lot of other security problems come from bot networks.  Having verified identity also prevents Spam.  (Comment – the actual talk went into detail about how they were going to achieve all of this.  I think it required some modifications to underlying protocols to do right, which from my experience means it isn’t likely to ever happen.)

 

All this identity management and prevention of identity fraud is, from the user’s perspective, intelligence in the network.  (Comment – one could argue that there are 3 essential functions of intelligence in the network – Security, Routing, and Charging – and this fits)

 

“Many applications aren’t IMS based and never will be – don’t plan on IMS providing the answer to security.”

Mark Faster – NeuStar, Collision or “Coopetition”.

 

What’s colliding?

  • New players from outside traditional communication and largely unregulated.
  • Regulated players including new entrants following the traditional service models.

 

Value comes from putting new capabilities together.  (Many examples). 

 

“In the past 3 years an entirely new class of network operator has emerged with subscribers equivalent to 1/3 of the global mobile network subscribers” – social networks.  New users don’t use email, phone, or traditional IM; they do everything through social networking. (Comment – he talked a lot about Second Life.  I have read a good deal about this one, which is basically a world simulation with lots of virtual environments.  People actually make money creating things in Second Life and selling them to other users for real money.  Of course, like every other new technology, a lot of it is about sex, fraud, or gambling :-)  I have also heard that Second Life is in decline)

 

Social networks have much more “stickiness” than traditional networking.  They also have different growth characteristics from traditional networks.  They support multi-point communication, which he said enables them to grow according to “Reed’s law”, which says the value of a network grows exponentially with the number of participants, versus “Metcalfe’s law”, which applies to point-to-point networks and says the value grows with the square of the number of participants.  Exponential growth is much faster.  (Comment – you can take these laws with a lot of skepticism, but I can see the argument that multipoint communities grow quicker.  They also become useless quicker because of the amount of unwanted communication you start getting.)  (Comment – when I heard this I suspected that “Reed” was David P. Reed, my office mate at MIT who did a lot of the early work on TCP/IP and went on to join others advocating end-to-end functionality.  That turns out to be correct.  He has an interesting paper from a few years back presenting some of the math and analysis behind the claim of exponential value growth on his website, easily located through a search.)  In June 2006, Myspace nudged out Google as the most popular web page.
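
A rough illustration of the two scaling claims (my own sketch, not from the talk): with n participants, Metcalfe’s law counts the possible point-to-point links (~n^2), while Reed’s law counts the possible subgroups (~2^n).  The absolute values are meaningless; only the growth rates matter.

    # Compare Metcalfe-style vs. Reed-style network value growth.
    def metcalfe_value(n):
        return n * (n - 1) // 2      # number of possible point-to-point links

    def reed_value(n):
        return 2 ** n - n - 1        # number of possible subgroups of size >= 2

    for n in (10, 20, 30):
        print(n, metcalfe_value(n), reed_value(n))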

 

Interconnecting disparate communities has lots of value -> Presence is a stimulant for instant messaging and significantly increases IM traffic no matter how it is done.  1/3 of IM sessions transition to voice calls.

 

IM/SMS networks in the US fought interoperability with their competitors for fear of losing subscribers.  18 months after interoperability came, volume had increased 30 fold.

 

Malcom Matson (OPlan), “Collision or Coopetition?”

 

OPlan is a non-profit focused on open networks, really focused on education for “the common man”.  He is a UK entrepreneur who started the 3rd major telecom company in the UK and the first to be all fiber based.

 

He went through some history of Colt, a UK startup that was focused on a closed network approach.  He bailed out during the mid 1990s which turned out to be around the peak value of the company.  He bailed out in part because he was at odds with the “closed” approach and couldn’t understand the valuation of the company.  (Comment – the 1990s are a good lesson for anyone making investment choices – if you can’t figure out the valuation odds are pretty good it’s not there.)

 

He started with 7 fundamental principles driving what happens when things collide.

  • History repeats itself.  He discussed the work of a Venezuelan economist studying revolutions.  Each has a common rhythm – 20-30 years of getting started, 20-30 years of a “golden age” of unforeseen benefits, with instability and uncertainty in between – and that’s where we are today.  Underpinning each revolution there was a specific infrastructure (canals, railways, electricity, etc.).  His observation – in no case was the next generation infrastructure delivered by the same companies and people.  (Comment – 25 years ago I was a guest speaker at a Bell Labs management meeting, and the speaker before me made this exact point looking at the chemical industry as an example.  It was in fact much like “The Innovator’s Dilemma”, which came years later.  I think this is a great point.  Revolutions don’t happen because the incumbents want them)
  • Characteristics of networks change radically in revolutions (This was pretty much self evident)
  • Flawed regulatory thinking.  “The original sin” – allowing local communication companies to merge, then nationalize, then privatize, so what started as locally controlled, market driven companies became big top-down monopolies.  What should have happened in deregulation was allowing new entrants to get in bottom up, but instead we used 100 years of monopoly thinking and regulation to create competing “clones” of the existing companies.  He went through an example of how canals came into being and what would have happened when railways came along if we had had the same regulatory mindset as we do for telecom – replacing horses with steam engines pulling barges.  This wouldn’t be awful, and everyone wins to some extent, but what really happened was that railroads developed unregulated, killed the canals, and brought lots of unexpected benefits.  (He mentioned dramatic reductions of disease in London related to the greater availability of farm products brought by rail.)  He likened “unbundling” of DSL to the steam engine hauling barges – not the best way to open up the internet technology.
  • Continued reliance on flawed business models.  He talked about the vertically integrated business model of telecom operators as flawed.  He went through the problem of figuring out who pays for the network and why vertical integration is bad.  “Local access is the mother of all battles.”  Who controls the intelligence in the networks, the network owners or the users?
  • Free trade principles leading to open access has always been the way to win business.  (Singapore was the example he gave).  This is why we don’t want a lot of arbitrary tariffs and toll charges in the internet (Net Neutrality)
  • Cities and communities competing in the global market have to embrace the new technology models, not just the old ones.  Total annual telecom charges are $1.6 Trillion.  Amsterdam is investing in fibering all public housing units (for free) because it saves money by allowing older people to stay in their homes longer.  Another example was a local entrepreneur in Denmark who deployed open WiFi in a rural area.  The savings (by not paying telecom access charges) were dramatic.
  • It’s all about the creation and maintenance of relationships.  That’s the value – freely formed association with others.  The telecom industry doesn’t get it – the trade press is all about downloading content and protecting content.  The killer application isn’t content, it’s communication.  (Comment – two problems: one is that nobody knows how to make money from anything but paid content, but the other is that even things that look like content, like ring tones or downloaded clips, are really about communication.  People do it not because they want to listen to or watch something, but because doing so is necessary to stay connected and current in their social networks.)

“Survival is not compulsory” (Deming).  He closed with a slide he says is 25 years old and comes from an item in the Economist, which talked about how companies still operate as though they inherited permanent control over their markets and customers, yet they are powerless to keep them in the face of superior technologies or competition.

Discussion

$1.6 trillion today is based on the model of the network as a tollroad with charges.  If we go to a new model based on advertising, the total is 1/10th of that.  If that’s what happens, is there money to fund the next generation of innovations?

 

Imode is everyone’s favorite example, but it has largely collapsed and had no impact outside of Japan.

 

Question to Panel – what should we do?

 

Baker – last century was all about putting copper in walls.  Copper hasn’t served our needs well.  Fiber is cheaper, has more bandwidth, and is less subject to tampering and theft.  Bury more fiber.

 

Matson – take the money saved by not spending 1.6 trillion on infrastructure and put it into new things.  Problem is that big companies don’t know how to do it.  They are excellent at milking continued slow growth.  All the CEOs know there is a big crash coming at some point, but the strategy seems to be “cash out before the crash”, rather than figuring out how to embrace change and build a sustainable future.

 

Faster – all the CEOs they talk to are very worried about the change in the industry.  Operators are trying to remake themselves.  Example, FMC – France Telecom and Orange became Orange (the mobile brand), most of the senior executives at Verizon are from the mobile company, and AT&T merged into Cingular and rebranded everything AT&T.  (Comment – actually I see this last one as an example of being blind to change.  The view was clearly that AT&T was the more valuable brand, but this flies in the face of the shift to mobility.  Would public perception be different if the company had become Cingular?)

 

Question (Rezza Jafari, chair of the ITU):  60% of the world’s population has no access to communication.  We have a developed (western) world bias in our thinking.

 

Mark – You are right, but our industry is learning.  A few years ago China asked for the remainder of the IPv4 address space to put their educational system on line.  ICANN said no, so China has largely gone IPv6.  He talked about challenges in the Muslim world where religious belief sometimes challenges our notion of common sense (example – he was questioned about why WiFi should be provided in a women’s dorm).

 

Question (Chet McQuaide) – Highways show “the tragedy of the commons”.  Open access, lots of capacity, but traffic jams make them unusable.  Won’t this happen to the internet? (Comment – in past discussions Chet has brought up the internet community’s stand on Net Neutrality as being a problem for BellSouth, now AT&T, several times.  The right not to be neutral is clearly a strongly held belief for him and/or the company.)

 

Mark – If there’s a need, the open market answer is to let people build to it.  Chet’s interpretation was that there would be a business opportunity for an alternate “toll” highway, and in a separate conversation he said that’s the Telecom view on why net neutrality is bad.  Telcos want the right to offer a two tier system, an open highway for one price and a “toll” highway that costs more (maybe paid for by the web sites).  (Comment:  My response was that the trouble is, just like the early days of regulated and unregulated services, nobody trusts them not to overcharge for their basic service to subsidize the toll service.)

 

Session 2A Business Models

This session was really a continuation of the themes of the keynotes, looking at the impacts of new business models for communication services.

Roberto Minerva (TI Labs) There is a Broker in the Net – Its name is Google.

Roberto has been a leader in services at TI Labs for many years (I worked with him in the Bell Labs/IN days).  His paper was selected for one of 3 “Best Paper” awards.  He describes his role as having changed, looking more at trends than nuts and bolts.  One warning to us – when we go into the new world, we are the newcomers and Google is the incumbent.  They have been working in this area for many years and know what is going on; we are the onlookers.  Roberto has been involved in looking at service brokering for many years.  (Comment – Service brokering was a major theme of TINA-C, which Roberto and many other old timers in network services were involved in.)  His contention is that Google is doing everything that the people who envisioned service brokering for telecom set out to do.

 

If you look for papers on search engines you will find references from 1998, which is about the only published information from Google.  Things have changed (but not been published), so he has been exploring the details of what Google really does to better understand how they perform the brokering role.

 

Google has been doubling revenue every 12 months for several years ($10.6 billion in 2006), though slowing a bit recently.

 

Virtually all of Google’s expenditures go into their data center.  In addition to searching, it supports caching, which gives them the ability to provide high performance high reliability access to web sites (Google will serve pages up out of its cache rather than the public internet).  (He said many say there are 2 internets, Google’s internet and the real internet, and Google’s is more important to most people)

 

Google’s page ranking algorithm is based on linkages – the value of a page depends on how many links there are to it and where they come from.  The algorithm operates on the output of a Web crawler that is distributed and replicated many times.  (Comment – the paper described this in more detail)
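
To make the linkage idea concrete, here is the textbook power-iteration form of link-based ranking (PageRank as described in the late-1990s publications he refers to); this is a sketch for illustration, not Google’s actual production algorithm:

    # links: dict mapping page -> list of pages it links to.
    def pagerank(links, damping=0.85, iterations=50):
        pages = list(links)
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
            for page, outgoing in links.items():
                targets = outgoing or pages          # dangling pages spread evenly
                for t in targets:
                    new_rank[t] += damping * rank[page] / len(targets)
            rank = new_rank
        return rank

    print(pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]}))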

 

Estimates of Google’s data center run from 60K to more than 100K servers – “pizza box” PCs with 1-2 GB of RAM and 80 GB of disk.  They use their own version of Linux, a modified version of Red Hat (mainly in the file system, to allow management of bigger files by dividing them into chunks and replicating the chunks).  “Bell Labs was the academy of telecom, Google is the academy of IP”.  Lots of the luminaries of Unix and distributed computing now work for Google.  (Comment – when I followed the operating system world more closely there was a clear migration of the experts in the field among the research labs of major companies whose market fortunes were high.  Xerox PARC and IBM started the trend, but as they faded people went first to Digital, then to Intel, then to Apple, and on to Microsoft.  Google is the latest stop along the trail)

 

Google data centers are very powerful and near (at least in network connectivity) to their customers.  He gave a couple of flow charts for Google’s advertising engine.  A lot isn’t published; it’s on blogs.  Nobody outside of Google really knows exactly what is in the algorithms (that’s their value added and they are protecting it).

 

TI Labs has been monitoring Google releases recently.  TI Labs spent a lot of effort building their own services software based on SIP.  Google is doing it better for less (beta test, no guarantees, but it works).  His example was that 7 people built Google’s VoIP implementation, and he asked how many hundreds a traditional telco would employ to do it (lots). (Comment – though what I know about Google Talk doesn’t seem like a gigantic implementation task to me, Roberto has studied it harder and is genuinely impressed by the apparent breakthrough in productivity represented by doing it with only 7 people, though again I think sociology and situation have more to do with the result than any specific technology)

 

Google forces users to identify themselves to use most services and builds a user profile on you.  Google reads your email and your SMS and uses them to tune the ads that it gives you for more value.  (He gave an example of sending emails on the death of a friend and getting ads for florists and other bereavement related businesses.)  (Comment – he wasn’t happy about this, but I’ve been wishing for years that someone would really understand what kinds of things I might buy and limit the advertising to those things, since now I get tons of ads for things I have never bought and will never buy.  To do that they have to learn more about me, and I wonder how many people are willing to give up some privacy for the sake of less junk showing up on their screens)

 

The final slide showed the IMS architecture versus Google – both will deliver the same services in competition.  Google is much simpler – 2-3 relays of HTTP messages for one call setup versus 48 SIP message exchanges in IMS, according to one analysis.

 

Google is the broker in a lot of contexts – advertisers to users, users to services, access users to advertisers, etc.  (He described Google WiFi and the coming Google 700 MHz broadband service.)

 

Google gives away all the services that telcos consider their bread and butter and charge for.  Service is free but comes with ads; advertisers pay.

 

Who can stop Google?

  • P2P? – maybe, these services have a different value, and they are working on P2P search engines which may allow them to compete with Google.
  • New search technologies? (Maybe)

 

Google and operators – cooperation or war?  It is not clear how operators are responding to Google.  “It’s absurd to try to block access to services – you are shooting yourself in the foot, because your users are the ones who get hurt and they will go elsewhere.”

 

Question:  The Google model of service is free and advertisers pay used to apply to radio and TV broadcasting, but now many things are available only if you pay.  Is advertising enough?

 

Roberto – they have to get more aggressive.  If you are looking for the world cup match, Google would come back and say “we know you are looking for a car; if you are willing to watch a 5 minute advertisement for a car you can watch the match”.  Maybe there will be two different levels of payment depending on what ads you see, and Google does have the structure to collect payment.

 

Question:  How do you compete with Skype?  Roberto – he is familiar with Skype; he set up a relative with it, and she used it to nail up a connection between two offices, used for informal conferences all day long, and loved it.  The point is that Skype isn’t telephony.  In Italy most telephony is sold at bulk “all you can talk” rates and Skype isn’t used much for that, but it does get used for non-traditional calls.  (Comment – I don’t know how typical his description of Italy is.  I can buy cost effective “all you can talk” plans in the US, but calling other countries is still expensive unless I use alternative carriers, so Skype is important and competing with both traditional and alternative carriers for my international calling)

 

Bernard Villian (Alcatel) The Meaning and Impact of Convergence

Bernard has been in the network intelligence business for many years.  He is a past chair of ICIN and now manages WiMAX standards for Alcatel-Lucent.  He was presenting for another Lucent speaker who couldn’t travel to the meeting.

 

Convergence has many definitions:  Telecom/IT, Fixed/Mobile, Enterprise/Mobile, Media and Telecom, Voice/Data, Communications and consumer electronics.  He went through a lot of reasons why Convergence has value to carriers and customers (not much new here).

 

His message was basically that there is value in convergence and there are lots of issues in making it work (network efficiency, etc.).  The talk didn’t present anything surprising or very new, probably because he wasn’t the original author and was no longer working in this area.

 

Question – surprised you didn’t talk about billing.  (Answer – yes billing is important)

 

Michal Dunaj (DT Labs), Opening Enablers to 3rd Parties and Interplay with Emerging Business Models.

He started with a description of DT Labs and their open services effort as basically being about how to bring new people into working with DT to add value to the network.  He described what is going on in the migration to IP as basically being “delayering” – reducing the layers and opening them up.  (Comment – that’s what should be happening, but I’ve seen way too many architectures that make IP MORE complicated with LESS openness)

 

He talked a lot about how to package network components into enablers that could be opened (presence, connectivity, etc.).  Enablers have to be open, allow combinations, be under user control, and support commercialization. 

 

Presence is a great example, but to become useful it has to have more cooperation (not just a silo).  Users have to have more control.  Need to go to many sources, then NGN as well.  Value rises sharply with what can be provided and how good it is.

 

One way to do this is to create a wholesale model – telcos sell to an aggregator of service enablers (an “Enableco”), which federates all their enablers and sells them to companies which build the services.  (Comment – in many areas this would indeed be useful, allowing a user to have a choice of mobile carrier, fixed operator, and internet provider(s) and still get unified services.  In the early days of IN/IP convergence, my team prototyped “internet call management”, allowing you to manage calls made to one endpoint from an IP client – for example, monitoring calls made to your home phone while at work.  I quickly realized that my home phone service was provided by a different company than the one that served my office, and this would never happen unless the two cooperated somehow.  DT Labs is proposing a different model that may work, where each operator exports what they want to provide (e.g. the IN trigger points needed to detect and control the call) and a 3rd party then aggregates them and provides them to a service provider.)

 

Session 3A SDP and SOA

 

Anders Lunquist (BEA Systems) chaired the session and talked about the changing role of service delivery environments in next generation networks.

Jose Mareno (DT Labs) The Akogrimo way towards an extended IMS architecture.

 

IMS is derived from IP and is in the right direction, but it is only focused on SIP.  Akogrimo adds other kinds of applications.  Their particular focus seems to be on supporting the GRID environment, the ability to share services and capabilities across many different endpoints.  The network operator/IMS is the broker that helps get people together.

 

One problem is that GRID environments are based on SOAP, not SIP, so how does IMS manage SOAP sessions?  One approach is SOAP over SIP, another (theirs) is to use SIP to set up a SOAP “session”, using special SDP to describe the session.  He uses the SIP registrar to register SOAP services basically so that one can use normal IMS location services to locate appropriate services.

 

(Comment – I had high expectations for this paper, but in fact found the material extremely confusing, probably because I don’t have the detailed background in Web services and GRID environments that the presentation and paper built on.)

 

Question:  The idea here seems to be to add things to IMS networks that can’t be done by SIP.  Give an example.  The example given was any service based on web services involving multiple parties.  The interactions between parties are all done with Web Services, not SIP, so IMS doesn’t help – it’s oriented to setting up sessions and there aren’t sessions here.  Akogrimo extends IMS to set up web services “associations” among the parties.  (Comment – I think I understand this, but it’s subtle.  I have to wonder whether we aren’t using different language to describe the same things.)  “We don’t have enough imagination to envision the new services, but building a structure that can put enablers together to allow new services to emerge has to be good”.

 

There was a lot of discussion on why IMS would or wouldn’t support SOAP, and whether GRID was standard Web Services or not.  Discussion was cut off for time.

 

Sungcheol Park (KT) KT’s IN Service migration to NGN based on SOA

KT has more than 20 IN services deployed in the network today.  Many customers have several services they subscribe to.  Their customers want converged services, which requires more development, and they want to preserve those services across technologies.

 

They are looking to build new services around an IN Services Gateway.  Service platforms connect via SOA/XML to INSG, which mediates connections between services to allow one service to use another (e.g. Call notice server accessing SMS server to send a notice).  The INSG also manages global profile data so that each service has data in the common profile database. 

 

He described several call scenarios, then presented an example of a limitation:  two call control services that want to share the same trigger.  (Comment – he needs what we call “TRIM”, Trigger Interaction Management.)

 

A second limitation is in accessing the database which holds service data – usually the key for the data is the calling party number or called party number, but some services (e.g. personal number) apply to neither.  No service can access any other service’s data.

 

His view of the alternative is an intelligent broker system with an SDP.

Berndt Kapong (Sun) Agile Service Assembly – SOA in Service Delivery Frameworks

He is the chief technologist for Sun’s SDP effort in their media technology area (not sure exactly what that means).  His example for “why use SOA” is Facebook.  Over 5,400 services have been built in 3 months based on an API, query and markup language (900 were built in the last 2 weeks!).  The applications distribute “virally” among networks of connected users and drive a great deal of advertising and transaction revenue.  Facebook got 4M new users in 3 weeks.  This is where the real next generation service competition is; telecom has to understand how to play in an environment with agile competitors like Facebook.

 

Many telecom operators have chosen to stay out of efforts to define SDPs for many reasons, but this is a key issue for them.  Content is NOT king – it’s less than 10% of communication revenue.  Communication is a much larger market.  Operators won’t win by capturing content while losing communication business.  SOA/SDP is a way for operators to look at creating better structures to empower service development.

 

SOA is about understanding your business practices and making your services follow them.  The paper gives an “Eye Chart” of best principles for engineering business practices.  The paper focuses on service delivery and service creation. 

 

He argued that Telecom does need some special low latency environments that may not apply SOA as much:  SLEE, SCIM, and some other real-time components may need to differ from the platforms used for Web and enterprise services, but networks should be architected to keep these special needs platforms to a minimum so most software is built in more common environments.

 

One aspect of Service Creation is orchestration.  This is ideally done with scripting in Web Services using WS-BPEL, which is great for quick prototyping.  You put service enablers together into process flows, and if you have the right service enablers you can cover a lot of services.  Web2.0 “mash ups” are similar, but often based on different composition rules; most are client driven (e.g. Ajax, widgets, etc.).  (Comment – this is a lot like building telecom services out of service building blocks.  Experience in Telecom has been that it works well within a very constrained service domain (e.g. routing rules), but isn’t flexible enough to do anything really new)
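
The orchestration idea in miniature: a service is just a flow over enablers.  A real WS-BPEL process would be XML executed by an orchestration engine; the Python sketch below only illustrates the composition pattern, and the enabler names (presence, location, messaging) are generic placeholders of my own, not anyone’s product API:

    # Send an SMS when a friend is both available and close by.
    def notify_when_nearby(user, friend, presence, location, messaging):
        if presence.status(friend) != "available":
            return False
        if location.distance(user, friend) > 1.0:      # km, arbitrary threshold
            return False
        messaging.send_sms(to=user, text=friend + " is nearby and available")
        return True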

 

Service Creation is only part of the problem.  We have a good handle on that; what we aren’t good at is service management – deployment, provisioning, assurance, usage/charging.  We have to create manageable services, not just new services.  Composition makes this harder because services then have dependencies.  (Comment – I believe that a large part of the problem people have maintaining personal computers is the complex and poorly understood dependencies of applications on shared libraries (DLLs) and the registry.  It would be a disaster to recreate that experience in telecom!)

 

Concluding – the industry needs to radically rethink how we build and deploy services and what platforms we use if we want to compete against the “new” service providers out there like Facebook.  Service creation, service orchestration, and execution are beginning to be understood, but we aren’t good at service management. 

 

Question (Chet McQuaide) – Is the operator OSS/BSS structure an advantage or a roadblock?  Not sure which.  If done right it can be a big advantage.

 

Question (Warren Montgomery) – SOA and other things we have heard about create new dependencies – services built from components and shared profile data.  This is good for productivity and cost, but many of the problems users have with PCs come from poorly understood dependencies on shared libraries, drivers, and databases.  In the past Telecom avoided this by sharing little and testing exhaustively.  How do we ensure we don’t introduce these problems in building next generation SDPs?  (No real answer to this other than that you can do re-use well and have to provide incentives to people to do it.)  (Comment – several people approached me after the session and mentioned that this was an interesting question.  After almost a full day of hearing speakers talk about the superior service creation capabilities of the IT world, we need to understand that Telecom has done some things well and that the flexibility and speed of IT don’t come without new risks and costs.)

Session 3B Service Composition

Dave Mansutti (HP)  A case study on the convergence of Web Services and IMS

His study involved Instant group communication (IM) – why IM?  It has been a big success in the web, and there are good opportunities for composing services.

 

He went through some work they did by using web services to build applications mainly on client systems working with IMS based servers.  Portability is a hard problem, as is access to all the capabilities of the client.  Complex services need more than Web Services alone.

 

Web Services do apply to the whole architecture – they are useful on the client as well as the servers.  They did believe that by exposing the right interfaces they could enhance services through composition.

 

Question (Michael Trowler, BT) – What’s the right level for APIs?  Good question.  There is a tradeoff between being complete and being too complex.  Parlay is good for building communication applications (but too complicated and low level for those looking just to add communication as part of other services).  His view is that if you want to focus on a particular area like group communications, a high level focused API is best.

Rick Hull (Bell Labs) Data Types --  Simple Management of the Life Cycle of Session Oriented Services

Rick is a database specialist who was formerly a professor at USC.  (One of his co-authors is Al Aho, noted and now retired algorithms and data structures specialist and author of several classic texts)  (Comment – I remember his name, and believe that he worked in one of the organizations applying formal mathematics to protocols and communication programming)

 

Most service blending has been about exposing information from a network and using it in a transaction.  What they were looking at was “shared experience” communication (sessions) and managing the lifecycle of sessions.

 

Theirs was a “Greenfield” effort, looking at APIs for a new field, and he suggests that Parlay may learn from this.

 

Example – family portal – an “always on” group conference for a family, with access control for joining.  He said that Parlay X couldn’t support seamless upgrade from a 2 party to a 3 party conference.  (Comment – I don’t know, but I don’t think Parlay X really addresses conferencing.)  There may be controls on whether new users can join, and the ability to bridge in additional devices and media types.

 

The overall architecture used WSDL over SOAP/HTTP to a session manager that controlled the actual services, and to web servers providing the user interfaces.

 

Sessions have one or more bubbles, and each bubble has parties and media.  They have a rich predicate language that is used to specify events and notifications so that information can be filtered coming up.  The bubbles use state machines to describe the operation of the session.  (For example, each participant will have a machine describing whether the participant is invited, joined, listen only, speak only, etc.)
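
A sketch of the per-participant state machine described above; the state and event names are illustrative choices of mine, and the paper’s actual bubble model and predicate language are richer than this:

    PARTICIPANT_TRANSITIONS = {
        ("invited", "accept"): "joined",
        ("invited", "decline"): "left",
        ("joined", "mute"): "listen_only",
        ("listen_only", "unmute"): "joined",
        ("joined", "hangup"): "left",
        ("listen_only", "hangup"): "left",
    }

    class Participant:
        def __init__(self, name):
            self.name, self.state = name, "invited"

        def handle(self, event):
            key = (self.state, event)
            if key not in PARTICIPANT_TRANSITIONS:
                raise ValueError(event + " not allowed in state " + self.state)
            self.state = PARTICIPANT_TRANSITIONS[key]
            return self.state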

 

A lot of the detail of making the session work (escalation from 2 to 3 party, hangup, etc.) is taken care of by the underlying model.

 

Question:  How does this compare to programming in IN?  Answer – he joined too late to have expertise in IN, but he felt their model was very simple and intuitive. (Comment – one of the problems with IN was that there was no standard programming model; each vendor developed their own.  Some gave very simple models for simple services, but only the most advanced of the IN standards addressed multi-party calls, and that wasn’t extensively deployed, so services like “Family Portal” would almost certainly have been done with ad hoc programming on a service node.)

 

Villey Jean Christian (Orange)  Monolithic applications versus enabler functional splitting – an approach illustrated by the call log example.

He is an IN specialist, and was involved in a joint venture between Sprint and Orange on VoIP that is now in their Equant network.  He has been involved in a major deployment of dual mode service for Orange.

 

What are good criteria for deciding on re-usable functions?  They have to be economic in nature:

  • “Banality” – must be suited for mass adoption, either common today or in the future
  • Simplicity – Must be easy to use and integrate
  • Traffic Optimization – Has to be compatible with efficient call/service flows (example was that there are ways to do presence that result in way too many messages).

 

(Comment – as someone who worked on re-use in telecommunications for some time I think these are quite a reasonable set of criteria.  Unfortunately they require both experience (“banality” is often not apparent until many services using the function have been built) and in depth knowledge (Traffic optimization isn’t always obvious), which is probably why picking good abstractions remains a difficult problem.)

 

Is a Call Log a good candidate? 

  • Is it relevant to do this in the network given that all terminals do it?  Yes!  A network log covers many terminals, including terminals that aren’t on the network and so don’t receive the call.
  • Is it a good candidate by the criteria?  Very classical concept (Banality), simple, and traffic optimization isn’t really a factor, so yes.

 

What are the criteria for success?  Cost – this is not a mission critical service, so it is not worth spending a lot to make it absolutely available.  Make it “multi” – make it apply to as many networks and devices as possible.  The whole idea is to let you see the state of your calls from as many places as possible.

 

He talked about how to integrate the call log with other services – enablers may operate at many levels – protocols, Parlay call control, and services.  Call Logs really belong as a service level enabler.  Note that Orange had a demonstration of this service which is described under “Poster Session” in this report.
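
As a thought experiment, here is a minimal sketch of what a network-side, service-level call log enabler might expose; the interface is invented for illustration and is not the Orange implementation described in the talk:

    import time

    class NetworkCallLog:
        def __init__(self):
            self._entries = []               # one log shared across all of a user's devices

        def record(self, user, direction, peer, device, answered):
            self._entries.append({
                "user": user, "direction": direction, "peer": peer,
                "device": device, "answered": answered, "time": time.time(),
            })

        def missed_calls(self, user):
            # Visible from any device, including ones that never saw the call.
            return [e for e in self._entries
                    if e["user"] == user and e["direction"] == "in" and not e["answered"]]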

Simon Dutowskie (Fokus) Multi-access Modular Service Framework – Supporting SMEs with an innovative service creation toolkit based on the integrated SDP/IMS infrastructure

Their work focused on developing services for small/medium enterprises in order to determine the appropriate framework for introducing services using SOA.  The lab used a service creation workbench that composed service building blocks.  The intent is to enable end-user service creation from standard services. 

 

The example used was a location based service where a customer entering a designated area gets some kind of SMS (Comment – interesting, in the US this would be considered SPAM, in part because the receiver pays for the SMS, like getting junk fax calls which waste the resources in your machine) Another example involved streaming video from a device at a soccer match to a central server that would then distribute it to your buddies.  (Comment – wouldn’t this be considered piracy if the match in question had any kind of broadcast licensing agreement limiting distribution?)

 

They used Parlay X 2.1 as an interface to deal with the network which provided much of what they needed (But of course they found it didn’t do everything, and had to extend it!)

 

The prototype ran on the Open Source IMS Core (described in Monday’s tutorial).  It also used an IMS client framework.  He feels that the importance of clients in IMS is underestimated – it is hard to have one client capable of supporting all services, so they built a framework and customize their clients.  (Comment – I can understand this.  Given the range of media and communication types, clients have gotten too complex to assume you can support everything from one)

 

Question (University person) – The paper says you use Parlay X with policies.  What are the policies?  The work used policies from the OMA service framework and its approach to identity management.  They also used BPEL based composition of service building blocks and Parlay based access control.

 

Question (Dave Ludlam) – Comment on the fact that Parlay was first introduced in 1998 and we are still discussing call models, vs. what is happening in the internet space.

 

Parlay is responding to the problems with conferencing, call merger, and multimedia.  (Comment – yes, but the response has been SLOW.)  Lots of discussion resulted.  Rick Hull described various carriers’ approaches:  AT&T Wireless has a Parlay deployment, mainly for data and mainly used in partnership with companies wanting an interface to send/receive SMS.  BT introduced Parlay (but in a later conversation a BT person said that they have since withdrawn the interface).  In a followup with Dave and others, we outlined some reasons why the internet community has a much easier time:

 

  • No standards process – the industry leader just does it.  Facebook just decided to put out their APIs
  • No money changes hands.  This means less concern about identity/security, and lower expectations on the part of the user, so it’s easier to just put something out.
  • Efforts often involve only one company, no need for interworking.  (If an internet service succeeds, others will build to it)
  • No regulation to worry about appropriate interfaces for competition.

 

In other words the problem isn’t the technology, it’s the environment in which it operates.  (Comment – if I follow this argument it suggests that the telecom industry is doomed.  There is no way that it will ever be able to be as agile as an industry not saddled with the same operating environment, and over time all new services will migrate to the unregulated, advertising-paid internet, leaving telecom with providing basic service, which it does quite well.)

 

Session 4A (Service Brokering)

Chet McQuaide (AT&T) Service Brokering Opportunities and Challenges:

 

Chet’s paper came from the former BellSouth organization, joint with his team.  He presented an IMS view of a common real-time services architecture:  applications on top, Web and IMS in the middle, and LOTS of networks and access types on the bottom. (Comment – if I put on the “what would Facebook do” filter for this I’d say this picture is WAY too complicated.  Why make it work for all access forms and interwork with everyone if my customers are all on IP?)

 

He showed the 3GPP IMS picture we all know and asked why there was no service broker that allows combination of applications from various sources including Parlay and CAMEL.  SCIM is just a few lines of description in 3GPP.  There is discussion of trying to do this in 3GPP.  Chet repeated a comment from his co-author (Nick Hlujack) on the observation that our industry moves at a glacial pace:  “Glaciers are the only rivers which can move forward but still retreat (due to global warming).”  (Comment – this may be one of the best summaries of the industry I have heard :-) )

 

Chet made the point that you really need brokering among services even in a limited domain – if one leg of a simultaneous ringing service has voicemail, that voicemail may capture all the calls.  (Comment – yes, this is why service brokering is essential and also why it’s not obvious how to do it in a general way)

 

Chet illustrated the 3GPP filtering rules for SCIM – they allow simple selection rules based on identity, message, and direction, but are basically inadequate to capture the full range of brokering needs.
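
A rough sketch of the kind of simple, static selection these filter criteria allow – match a request on a few fields and hand it to application servers in priority order.  The field names and URIs below are simplified illustrations (real initial Filter Criteria are XML in the user profile), and note what is missing: nothing here can reason about how two triggered services will interact once both are invoked, which is the brokering gap Chet was pointing at.

    FILTER_CRITERIA = [
        # (priority, method, session_case, served_user_prefix, application_server)
        (1, "INVITE", "originating", "+1", "sip:prepaid-as.example.net"),
        (2, "INVITE", "terminating", "",   "sip:voicemail-as.example.net"),
        (3, "MESSAGE", "originating", "",  "sip:im-as.example.net"),
    ]

    def select_application_servers(method, session_case, served_user):
        matches = [(prio, as_uri)
                   for prio, m, case, prefix, as_uri in FILTER_CRITERIA
                   if m == method and case == session_case and served_user.startswith(prefix)]
        return [as_uri for prio, as_uri in sorted(matches)]   # invoked in priority order

    print(select_application_servers("INVITE", "terminating", "+15551234567"))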

 

Two problems for service brokering – the “real time” problem, mostly what we address, and the “operations” problem of subscription, billing, packaging, etc.  There are vendors who address this and it’s important, but not addressed here.

 

What are the requirements?  Standard, programmable at design time, low overhead (bypassed when not needed, though this is not simple because you want to be able to add brokering dynamically without re-entry of data), addressing the whole scope, and operating in various service architectures.

 

Some architecture slides – he showed a SIP/IMS structure with SIP applications and a Web application stack, with some common enablers that feed both, but you also need a broker that can work with both and interface with OSS/BSS to enable converged applications.  He put up a chart with a description of all of the data behind a service broker without explaining it much (the details are in the paper).

 

Warren Montgomery (Personeta)  TRIM – new tricks for old networks

My talk seemed well received.  I covered motivation for TRIM, some call flows, evolution to IMS, and some practical implementation issues.

 

Question (Rick Hull) – can you describe TRIM brokering rules with a common language?  Response – mostly I believe you can do it using XML rules, but there will always be exceptions where you need deep protocol access and service knowledge.

 

Question (J Hartog) – will the brokering be as complex as the service logic?  Answer – maybe, but the real value of saving the current implementation is its integration in service management and the preservation of the user experience.

J Hartog (Ericsson) – Converged Service Environment

 

Not all convergence is going at the same rate – The core is going to IMS faster than access.  That means you have to bridge architectures during transition.  (Comment – we talked a lot during breakfast as well.  His focus has been on delivering IMS services into CAMEL networks while many have looked at the converse problem (CAMEL services to IMS))

 

His network picture showed CAMEL and SIP-ISC architectures – without coordination you have separate services and no guarantee of consistency.  You have to duplicate services. 

 

Two architectures for convergence – one based on access network – triggering occurs in the network where service is requested.  Service platform has to deal with both CAMEL and SIP as a result. (Comment – this is the situation I see most people addressing) 

  • Minimizes transmission and signaling (no tromboning)
  • Service protocol is tailored to the network.
  • Must support two signaling protocols
  • Dual service control may impair consistency? (Assumption seemed to be two different implementations in the service platform)
  • Adds complexity (two state machines).  You could solve this by abstracting to a common one, but abstracting the differences loses functionality.

 

The second architecture triggers everything from the SIP/IMS core and routes everything up to it. (He talked about this being the model for VCC, the standard for fixed/mobile convergence service which allows a mobile phone to reconnect to a WiFi network and reroute a call in progress while preserving “Voice Call Continuity”)

  • Single service logic and state model  (He felt this was a big advantage if services are built by 3rd parties.  Others have observed that 3rd party service creation almost never happens though)
  • More consistent for the user
  • IMS supports multiple terminals (not clear how to do this in CAMEL)
  • Cost for overlay and transport into the core (May not be a big problem if IMS handles the interface without transporting the media)
  • Information in DTAP and CAMEL is lost (this may not be a problem if the CAMEL and DTAP information is encapsulated and sent to the IMS network) (Comment – but that makes it complicated again)
  • Complexity added to the edge of IMS (Need for translation)

 

Some examples:  Dual phone use. 

 

Multi-access phone (Wireless LAN, HSPA, and GSM).  He talked about the UMA approach.  He went through some specific examples of call migration from one network to another with each of the two strategies.  With access triggering, the big problem is synchronizing CAMEL triggering with the existing call when the call migrates into the CAMEL network.  It’s a bit easier when the call is anchored in the IMS network and services are triggered in the IMS.  (Very complicated examples)

 

Access triggering is preferred when existing services are extended or when existing services get data enrichments.

 

IMS triggering is preferred when new services are designed, when services are converged, or when multi-access phones are used.

 

Questions:  Is the IMS triggering real, or is it theoretical?  Any practical experience on triggering CAMEL services into SIP?  It sounds nice, but does it work?  His answer is that yes, there are examples and a lot of standardized mapping for putting CAMEL information into SIP, including location mapping and all the details.  (Comment – but has anyone actually built it?)

 

Question:  Have you looked at the TRIM/SCIM architecture, and if not, why not?  Answer – each has a use.  (Comment – I believe it depends on what you want to preserve and extend.  TRIM/SCIM is about extending the existing services into the future.  IMS is about new stuff only)

 

Huilan Lu (Lucent, presenting for Michael Brenner) on standards for service delivery

 

She presented a long exploration of the various standards for SDE and SDP, really focusing on the OMA view of service creation.  Nothing exceptional in the talk.

 

Discussion:  I had a lot of followup discussions with various people on service brokering and service triggering.  One big question is how the TRIM/SCIM/IM-SSF view fits with the two views Ericsson presented on brokering.  It seems to be an intermediate approach (trigger in one network, and potentially serve to the other only when needed).  I agree, though I think the real determinant of which approach you want for service brokering depends on what you are trying to preserve in your network migration.  Those with a large investment in IN services looking to move forward will prefer the TRIM/SCIM/IM-SSF approach, while those with little invested may want to start clean in IMS.

 

Session 5B, Service Enablers

Risto Mononen (Nokia) Location information and mashups with mobile web server.

 

What they have is a simple open source (linux) web server running on a mobile device that communicates mobile information back to the network.  He has a demo he hopes to be able to show.  The mobile web server (Raccoon) makes content on the mobile phone available to a mapping server, which then combines information from Google maps to deliver pages back to whatever mobile requests it.

 

The server registers all users (some outside of Nokia).  He described the implementation, basically using a web services interface into Google Maps to translate the GPS data from the handset into a location.  Other functions that would be available if he were connected include the ability to activate the camera on the phone and send SMS messages.
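
(Comment – to make the mash-up concrete, here is a minimal sketch of the kind of phone-side handler such a mobile web server could expose.  This is my own illustration, not Nokia’s Raccoon code: the servlet, the XML format, and the GpsReader helper are all assumptions.)

    // Illustrative only -- not the actual Raccoon implementation.
    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class LocationServlet extends HttpServlet {

        // Hypothetical helper; a real phone would read the fix from the platform GPS API.
        static class GpsReader {
            double latitude()  { return 44.8378; }   // placeholder fix (Bordeaux)
            double longitude() { return -0.5792; }
        }

        private final GpsReader gps = new GpsReader();

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            // Return the current GPS fix as a small XML document that the mapping
            // server can turn into a Google Maps marker.
            resp.setContentType("text/xml");
            PrintWriter out = resp.getWriter();
            out.println("<?xml version=\"1.0\"?>");
            out.println("<location>");
            out.println("  <lat>" + gps.latitude() + "</lat>");
            out.println("  <lon>" + gps.longitude() + "</lon>");
            out.println("</location>");
        }
    }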

 

(Comment – After several false starts he got it up, and I could access the server from my laptop using WiFi access.  I could see locations for 23 phones, only 3 of them online.  Eventually I got to his phone, and could get a picture of the conference room taken with the phone.)

 

They have a few hundred users of this and draw some conclusions:  Location is valuable, Google maps are very nice.  Big problem is battery consumption – the phone burns its battery responding to location requests and running the browser. 

 

The main point here though seemed to be to illustrate a new kind of service that could be created as a “mash up” using the facilities available. 

 

Question:  What service ideas based on location do you have?  Finding routes, finding things nearby, location-driven advertising.  (Comment – again, why isn’t advertising pushed at a phone, triggered by location, considered SPAM?)

 

Question:  Can I sign up for this?  Yes, you can download the software from http://raccoon.openlaboratory.net/RaccoonOnMap/RaccoonOnMap.html.  (Comment – I’m sure you need the right kind of phone and the ability to download new software, which may not be commonly available, but it’s interesting.)

 

David Vannucci (U Witwatersrand) Extended Call Control Telecom Web Service

ICIN has attracted papers from this university for many years.  They generally are good technical papers.  I believe the interest stems from an IEEE conference on IN held in South Africa in the late 1990s.

 

Currently there are two approaches to service creation:  Full Parlay, and Web Services.  With Parlay the main problem is complexity (registration, etc.)   He talked about the Parlay X service and what is available and what isn’t to introduce the need for extended call control.  They followed an ECMA view of call control.

 

He gave an example of a virtual PBX delivering to Mobile endpoints using Parlay extended call control.  He went through a couple of call flow examples (very detailed) on how calls move through the states in the extended call control model.  (Comment – this stuff is basically IN all over again, a stateful call model and triggering and transitions)
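
(Comment – the paper’s actual state model isn’t reproduced in my notes, so as a reminder of what a stateful call model looks like, here is a purely hypothetical sketch; the states, events and transitions are generic IN-style ones, not the authors’ extended call control model.)

    // Hypothetical call-state machine, for illustration only.
    public class CallStateMachine {

        enum State { IDLE, ORIGINATING, ALERTING, ACTIVE, RELEASED }
        enum Event { SETUP, RING, ANSWER, HANGUP }

        private State state = State.IDLE;

        // Apply an event; each state change is a point where a trigger could
        // hand control to service logic.
        public State handle(Event event) {
            if (event == Event.HANGUP) {            // release is valid in any state
                state = State.RELEASED;
                return state;
            }
            switch (state) {
                case IDLE:        if (event == Event.SETUP)  state = State.ORIGINATING; break;
                case ORIGINATING: if (event == Event.RING)   state = State.ALERTING;    break;
                case ALERTING:    if (event == Event.ANSWER) state = State.ACTIVE;      break;
                default:          break;            // ignore events that don't apply
            }
            return state;
        }
    }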

 

Question:  So what’s different?  They used a different approach to determining what’s required and arrived at about the same place.  The implementation isn’t tied to the API or a particular network, you could connect to Asterisk, mobile network, fixed number, or anything else.

 

Claudio Venezia (Telecom Italia Lab)  Improve ubiquitous Web applications with Context Awareness.

 

(Comment – I worked with him in the past and especially his absent co-author, Carlo Licciardi, during a time when Lucent and TI Labs had a joint program in IN.  Claudio was heavily involved in JAIN SLEE at one point.)

 

He described some standards or consortium efforts aimed at mobile web:

  • W3C Mobile web initiative – produce tools and guidelines for building mobile-web-friendly web sites.  Create a network of vendors to overcome technical and tactical issues.
  • W3C rich web client activity – standardize browser libraries to support rich web applications.  Standardize widgets for mobile devices.
  • W3C Semantic web.  Define technologies to represent the semantic content of web pages to enable easier location and programmatic use (OWL is an example)

 

He described an effort that they had to put context around content – uploaded mobile pictures, to create a more attractive format for sharing information and drive increased use.  Called Teamlife, it allows you to put photos on a map, group them, use the location information from phones.  All this was done with Web2.0 techniques.  (Comment – I don’t think I saw much in it that wasn’t in Google Earth and their related photo placement sites)

 

Session 6A New Business Models II

Horst Thieme (Sun Microsystems) chaired this session and began with a recap of how many people have predicted the death of telecommunications, but said that business models like these will allow telecom operators to “fight back”.

 

Do van Thieu (LINUS)  Business requirements for IMS.

 

LINUS is, I believe, a market research organization in Norway.  What they have been doing is studying IMS deployments to understand what business needs they meet, not what the technology drivers are.  The study is almost a year old now.  They included SIP based NGN in addition to IMS.

 

  • BT (21CN) – it’s an IP core plus an MVNO approach to offering mobile services (no legacy mobile network). 
  • Cingular (AT&T) – driven to get a unified network architecture (IP based services of various types).  Their IMS is to roll out with HSDPA.
  • TI  Mobile – the driver was video sharing service.  Nokia is the vendor and it is limited to one handset from Nokia, mostly trial
  • Telefonica – IP services with rich communication, looking for additional revenue and cost benefits from shared infrastructure. 
  • Verizon – looking at IMS as an integration platform that includes both SIP and Non-SIP based services (Comment – not sure what this means, SS7 + IMS or something else)

 

Fixed and broadband operators deploying IMS are looking for cost-effective means for Internet-style applications and a path to convert legacy to IP.  FMC and penetration of the mobile market are motivations for some.


Pricing models (BT Fusion) – 5 pence for UK fixed calls, 15 pence for calls to BT mobile, 25 pence for UK mobiles (Comment – YIKES!  Given 5 pence is about 10 US cents, this sounds very high.  Maybe I missed something about what this was for)

 

He went through a lot of other requirements focused on users and operators.  Basically I didn’t see a lot of surprises in any of these.  One was interesting.  Mobile Operators mostly aren’t mentioning IMS – it’s an implementation detail, not a requirement.  They focus on services.

Heinrich Arnold (DT Labs) Enterprise Architecture and Modularization in Telco R&D as a Response to an Environment of Technology Uncertainty

 

DT has been trying to look more and more outside to see the future.  It’s difficult now since there are thousands of companies building applications.  His view is that for the first time telecom has the opportunity to benefit from Moore’s law, because IMS allows IT servers, which track Moore’s law, to be used as a platform for building services.  (Comment – I have always felt that Moore’s law has a funny characteristic – it doesn’t tell you whether the gain will be lower cost or better performance.  What happens in practice is that applications take the gain in lower cost of hardware up until a single chip will run the application, and only then does the application start to benefit from the performance growth.  What this says is that big applications, like switches, and especially applications that are late adopters of technology, will make the crossover later than little applications, like personal computers, which crossed over 10-15 years ago.)

 

Recent challenges --  The Web is a natural bypass of telco’s R&D – lowers the entry barrier.  Radical challenge – new players change the landscape through decoupling. 

We are in a period of experimentation – it’s not clear what will happen.

 

Telco R&D has lost control of the evolution of the network – more players and new entrants have more influence.  The right response is to focus on what really needs to be in the operator network.  Telcos have to support interoperability with legacy and NGN, and focus.

 

He looked at three trends in society and implications for needs: 

  • Move towards Free Market,
  • Move towards society/community, and
  • Increased Individualism. 

Then they looked at different building blocks for services and how they would support a person responding to each of these trends in order to determine which ones support all approaches – those things are clearly useful in multiple scenarios.

 

He got a bunch of questions on how their process was working.  They feel it is valuable and has helped them separate the applications and technologies that require telco knowledge and are not duplicated outside (the things they should work on) from those which are available and better done by others.

Don-Sung Oh (ETRI – Korea) Design of Next Generation Mobile Convergence Services Business Model.

 

Next Generation Mobile – beyond 3G/4G – is defined as “Ubiquitous Broadband Service”.   Next generation services include both convergence, and “human centric services”.

 

He described their thoughts on how to get there based on a scenario driven development process.  They develop and refine scenarios for how the service should work, then decompose them and analyze each one for each situation, and build to the scenarios.

 

He had an example starting from a written description of what would happen, then to a more structured text description, then decomposition into situations (places/times where things happen)  (Comment – this is not unlike use case driven design of Object Oriented Systems)

 

The pieces of the scenario were mapped onto different “actors” who participate in the delivery of the service, then given the behavior needed to deliver it.  They then map money flow and responsibilities.  (Comment – Not to criticize this work, but this is getting too hard and too structured.  While it is a nice structured development process, it reflects thinking about roles and domains from a “monopoly” kind of mindset.  The Web2.0 companies wouldn’t do it this way!)

 

Question (Horst Thieme) – So, what is the one service you would pick?

(He didn’t answer that; he gave a time frame for their work that said they would pick the services later and then begin work on a service platform to deliver them.)  (Comment -- Again, this is a typical “plan then build” kind of process we are all familiar with; I think today’s world would do it differently.)

 

Question (Horst) – to Heinrich – so, how do you keep the work on services relevant to the development?  (Answer – involve the product managers early!)

 

Session 7B  Service Architecture 2

I arrived at the end of the first presentation because of a late evening the night before.

 

Werner Dittmann (Siemens) “Where is the intelligence”

I arrived too late for all but the conclusions, but the conclusions were interesting.  He was basically stating that intelligence will move rapidly to the devices and the edges in next generation networks.  During the question period he said the transition would happen much more rapidly in fixed networks (he cited BT’s 21st century network as an example).  His message seemed to be that there is no future in SS7 based IN.

 

Hans Einsiedler (Deutsche Telekom) FMC service provisioning based on IMS using privacy and federation concepts

He presented their architecture called “ScaleNet”.  This is an IMS based network which has been extended with AAA Security and Mobility building blocks that get combined to implement the FMC service.

 

The work is part of a European Commission project called Daidalos, which is aimed at understanding communication beyond 3G.  (Comment – I’ve always found it interesting how much communication research is funded by governments and inter-government agencies like this in Europe, vs. in the US where projects like this are very hard to fund.  A few months back I spoke with some of the people at MIT I worked with as a graduate student who lamented the fact that it is difficult to get that kind of funding even for a large communication and networking project entirely within the university the way they could in the Arpanet and Multics days.)

 

One of the concepts he introduced was the notion of operating with a virtual ID to ensure privacy.  Your real identity is known only by a “trust center”, usually a trusted operator, who hands out a virtual ID that can be used to obtain services from an “untrusted” service provider.  His example was someone known to a mobile operator who wishes to use services from an untrusted access provider.

 

Federation is then used to extend these IDs across networks.  Making all of this work will require minor extensions of the ISS and HSS interfaces (Comment – not sure what extensions were required, but it always bothers me a bit when something which seems relatively minor requires extensions in core protocols)

 

Question:  Can you elaborate on Virtual ID and how they are federated and what extensions are really needed?  Answer:  Virtual ID is used by the service provider you want service from in order to authorize you and charge you.  Federation allows that information to be used in multiple networks and allows the charging/authorization information from your personal profile to populate the virtual ID and charging information to return, without exposing personal information from your personal profile.   Followup – isn’t this just what the Liberty Alliance is doing?  (I think the questioner was from Sun)  His answer was that Liberty Alliance was focused on Enterprise (Comment – yes, but I thought the concepts were general)
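
(Comment – a minimal sketch of the virtual-ID idea as I understood it: only the trust center can map the opaque ID back to a real identity, and a federation partner only ever sees the charging/authorization data.  The class and method names are my own illustration, not the Daidalos interfaces.)

    import java.util.Map;
    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;

    public class TrustCenter {

        // real identity -> the charging/authorization profile the user agrees to share
        private final Map<String, String> profiles = new ConcurrentHashMap<>();
        // virtual ID -> real identity (known only inside the trust center)
        private final Map<String, String> virtualIds = new ConcurrentHashMap<>();

        public void register(String realIdentity, String chargingProfile) {
            profiles.put(realIdentity, chargingProfile);
        }

        // Hand out an opaque virtual ID the user can present to an untrusted provider.
        public String issueVirtualId(String realIdentity) {
            String vid = UUID.randomUUID().toString();
            virtualIds.put(vid, realIdentity);
            return vid;
        }

        // A federated provider asks: is this virtual ID valid, and how do I charge it?
        // Only the charging profile is returned, never the real identity.
        public String authorize(String virtualId) {
            String real = virtualIds.get(virtualId);
            return (real == null) ? null : profiles.get(real);
        }
    }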

 

Ernö Kovacs (NEC)  Creating Converged Services for IMS using the SPICE service platform.

SPICE is a joint project between Nokia/Siemens and NEC, and the speaker is actually from a German research laboratory of NEC. 

 

What is a converged service?  It’s a moving target.  Convergence of transport (IP and circuit) was what it meant before.  Now it may mean convergence of private (home) networks with public networks, multi-media, etc.  SPICE is another big EU supported project with many partners.  The aim seems to be to support 3rd party service providers providing services across multiple networks using service enablers in each network.  The focus is on increasing service intelligence using context awareness and “knowledge processing” (I believe knowledge processing really means drawing conclusions about what a user or service may need from context information like where the service is being requested and the profile of the requestor).

 

The architecture they use has a layering which includes:  Capabilities and enablers, Component services “knowledge layer”, and Value added services.  Both terminals and networks have these layers.  Networks have an exposure and mediation layer on top of all of this that exposes interfaces to 3rd party service providers (on a separate platform).

 

The work is based on the Open IMS open source implementation, which interacts with some of the participants’ IMS based networks.  Their service execution environment includes a JSLEE based application environment with interfaces to J2EE supporting non-real-time services. 

 

The terminal service execution environment is implemented on Windows Mobile, and supports a “desktop” environment with widgets used to implement the pieces.  The knowledge layer uses a knowledge base in each device/server which connects into a set of distributed ontologies that organize knowledge about various aspects – identity, security, capabilities, etc.

 

Service creation is available at different levels.  For the end user it is quite limited – basically responses to particular triggering events (Not sure if the events can be extended)

 

He gave an example that would automatically find a restaurant when the user was in an area at dinner time.  Interestingly enough, they had a problem because they were using a Google interface for calendaring, and a few days before a critical demo the interface changed.  He correctly pointed out that this is a big challenge in building Web2.0 services: unless you have some kind of contract with the service you are using, it can change unpredictably and your service breaks as a result.  (Comment – this is essentially the broken windows software problem I suggested in an earlier comment.)

 

Enablers for converged services (Converged in this case meant working across networks)

  • Converged Identity management is needed.  They base it largely on Liberty Alliance
  • Converged presence management – uses a gateway to provide presence across networks.  (He had a very complicated presence gateway, said that they are not happy with the number of gateways in their architecture).
  • Service Roaming (full).  The goal is to be able to get the service you want ported to run locally in the serving domain, which means reconstituting it to use the local services and resources available in the domain you are visiting (versus allowing the service to run entirely in your home domain)

 

Question – are you planning to operate a real network across Europe this way?  Answer – yes, they have just set up a testbed interconnecting a couple of labs (didn’t catch which two) and plan to use this architecture.  The questioner asked about “panlab”, apparently an existing international laboratory effort.

 

Question (Telecom Italia) They are also a participant in Spice, and asked how Spice is doing in influencing standards.  Answer – some companies are pushing the results into standards (example, presence for IPTV). 

 

Question – looks like you are doing “major damage” to the structure of IMS, what’s the prospect for success (question was about service mobility to support “full” roaming.)  Answer – they are more interested in what the Web community does and he didn’t seem to care whether it fit the existing paradigm for roaming or not.  Felt that Telecom will have to fall in line with what the internet does here.

 

Session 8A – Mobile TV/ IPTV

Chaired by Kevin Fogarty, ex BT who publishes a telecom journal now

Michael Said (France Telecom/Orange)  Delivering Quadruple play with IPTV over IMS.

Michael is the FT representative to TISPAN in this area.  His co-author, Bruno Chartres, is one of FT’s key service architects and a long-term participant in ICIN.

 

Why use IMS?  It’s a common infrastructure for communication and delivery, common authentication, and some ability to use resource management from IMS (though IPTV related resources have to be managed in the IPTV platform, transport can be shared).

 

The scope of what they want to do includes both fixed and mobile delivery of IPTV and uses a common charging mechanism.  There is standards work in this area:  ETSI TISPAN has been working on integrating IPTV into the IMS architecture.

 

The extensions come in the core to implement a “streaming session” representing IPTV delivery, and to the media framework to implement storage for media.  Much of the rest is standard IMS (service discovery, authentication, etc.)  Setup of an IPTV session uses standard SIP with specialized SDP.  Negotiation of SDP handles things like rights checking and choice of media (coding, rate, etc.)  The control of the stream goes end-to-end via RTSP (doesn’t impact the IMS core).

 

Broadcast presents some new challenges since the broadcast session is set up by the broadcaster, which negotiates the transport resources; subscribers then request to join the session.  Again, SIP/SDP is used for setup.  Control of the stream uses IGMP, the Internet Group Management Protocol, which receivers use to join and leave IP multicast groups.  The main impact on IMS is the need for the ability to reserve the transport resources in the access network to handle multicast distribution of the broadcast channels.  He showed call flows starting with the user equipment contacting service control.
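
(Comment – for the record, here is what “control of the stream uses IGMP” means in practice: when a receiver joins a multicast group, its IP stack sends an IGMP membership report and the access network starts forwarding that channel.  A minimal sketch in Java; the group address and port are placeholders, not a real channel plan.)

    import java.net.DatagramPacket;
    import java.net.InetAddress;
    import java.net.MulticastSocket;

    public class ChannelJoin {
        public static void main(String[] args) throws Exception {
            InetAddress channel = InetAddress.getByName("239.1.1.1"); // placeholder group
            MulticastSocket socket = new MulticastSocket(5004);       // placeholder RTP port

            socket.joinGroup(channel);     // triggers an IGMP join -- the "channel change"
            byte[] buffer = new byte[1500];
            DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
            socket.receive(packet);        // first media packet of the stream
            System.out.println("received " + packet.getLength() + " bytes");

            socket.leaveGroup(channel);    // IGMP leave when the user zaps away
            socket.close();
        }
    }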

 

He talked briefly about IPTV service combining broadcast and streaming, such as the ability to pause or go back in a broadcast stream using network resources to buffer the video.  He also talked about adding normal IMS services on top of IPTV and the ability to use TV as another media type in a multimedia session.

 

The merger of IPTV and IMS can add value to both and has the potential to help drive adoption of IMS.

 

Question (Ove Faergemand)  What happens when you push a button on your remote control?  Does that get mapped into general messaging for IMS?  (No answer, but I believe what happens is that these commands get sent end to end)

 

Question (Alcatel-Lucent):  How does this relate to fixed/mobile convergence?  The questioner gave an example of streaming an encrypted session to mobile devices and said it needs no interference from IMS.  Answer:  Having a session associated with broadcast gives you a way to manage transport resources.  Some operators (FT as an example) have insisted that if a user doesn’t subscribe to a stream, that stream cannot be delivered to him, even if it is encrypted and he can’t read it without the key.

 

Question (Chet McQuaide) Which services combining IPTV and other applications have been implemented (things like call forwarding, click to dial, session transfer between terminals).  They have implemented the ability to move sessions from fixed to mobile and between rooms (Comment – yes, I think this is a case where you will really need “VCC”, since you can’t just connect again and find the right place to begin, and doing so would raise questions about whether it’s the same session for billing purposes)  He also talked about some form of parental access control.

 

Jairo Esteban (Alcatel/Lucent)  IMS and IPTV Service Blending – Lessons and Opportunities

 

Service blending means more than just bundling – SIP influencing IPTV and vice versa.  The example they studied was TV Alert to incoming call and interaction with it via your TV remote to control the call.  (Comment – TV alert is an old service, and the cable providers are either doing it or have it planned.  Control via a TV remote significantly complicates this over just providing an alert that you then can pick up with a normal phone.  I have to wonder whether that’s really what people want to do, since you can’t talk into your remote control, or whether what they really want is to be able to pick the call up via their mobile?)

 

He drew an architecture and key to it was the set top box and the ability to interact with the set-top, remote, Video server, and web content. They found several manufacturers which produce set-tops capable of running web browsers  (Comment – see the poster session on Home controllers).

 

When they started to look at implementing services, a first problem was that if the set top is a web browser, you can’t simply push information from the network (incoming call alert) at it.  The solution was something they called a “pushlet”, which was a means to send javascript into the set top to trigger local behavior.  (Comment – I can see the virus writers’ eyes lighting up all over the world :-) )  Some implementations allow you to reach the set top from another server that handles access control and other issues.
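
(Comment – as I understood the “pushlet” idea, the set-top’s browser holds an HTTP request open and the server streams small script fragments into it to trigger local behavior.  Here is a minimal sketch of that pattern; the queue, the URL, and the showAlert() function the set-top page is assumed to define are my own illustration, not Alcatel-Lucent’s implementation.)

    import java.io.IOException;
    import java.io.PrintWriter;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class CallAlertPushlet extends HttpServlet {

        // Alerts queued by other application logic; a real system would keep one
        // queue per set-top session rather than one per servlet.
        private final BlockingQueue<String> alerts = new LinkedBlockingQueue<String>();

        public void callAlert(String caller) {
            alerts.offer(caller);
        }

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            resp.setContentType("text/html");
            PrintWriter out = resp.getWriter();
            out.println("<html><body>");
            out.flush();                               // keep the connection open
            try {
                while (true) {
                    String caller = alerts.take();     // wait for the next alert
                    // Real code would escape the caller string before embedding it.
                    out.println("<script>showAlert('" + caller + "');</script>");
                    out.flush();                       // push it to the set-top now
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();    // request torn down
            }
        }
    }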

 

Cross-domain scripting was a problem.  Introducing new javascript into an existing web-driven client caused addressing problems (I didn’t really understand what the issue was; it seemed to be a javascript addressing problem which was solved by adding a sub-domain on the end of all the addresses in every script, which was still problematic because that could require modifying a lot of scripts).

 

They had lots of problems related to screen layouts, limits on how the window layout was done and how things like location and color of text were controlled.

 

Infrastructure concerns.  Each set top box needs an open connection to a coordinating server.  This creates an implementation challenge because of limits on sockets in the coordinating server (Comment – yes, this is a real problem for a lot of services where there are LOTS of clients)  Security is also an issue since the coordinating server may be way up in the IMS application server and the messaging has to penetrate firewalls.  (Comment – I don’t quite get this, since IMS is all about setting up secure sessions, and all this really needs is a secure session from an App server down to the set-top)

 

Conclusions:  Lack of standards presents a problem.  They had to figure everything out for themselves.  Without standards, every vendor is likely to do this differently and interworking will be difficult.  You need standards that go down to low levels, like handling the open connection for control and handling of layout in the set-top box.  (Comment – he’s right about the set top.  Having looked a couple of times at “TV Caller ID” before, the variety of set-top interfaces and what they did was a concern.)

 

Question (Warren Montgomery) – Is there really value in being able to manipulate the call from the set-top given you can’t take an incoming call over the set top?  (Didn’t answer it, instead explained that if you take the call your phone works).

 

Question (Chet McQuaide) – did you look at internet call waiting as a model?  (Answer: yes, but they found all the solutions to Internet call waiting to be proprietary and didn’t want to follow them.)  (Comment – I don’t recall how standard some of the ICW solutions were.  Most I am aware of use SIP notification to reach the internet client and/or actually open a SIP session to the client.  There was a lot of attention given to making some kind of standards-oriented approach (i.e. using standard open protocols rather than proprietary interfaces).  Internet call waiting has a big advantage in that the client is a smart device with input/output and even in many cases a soft client that can take an incoming phone or multimedia call.)

 

Lukasz Luzar (Comarch S.A) The Next Generation TV

 

Goals of NextGen TV include delivering interactive TV to all kinds of devices, fixed and mobile.  Services include broadcast, streaming, “private” TV channels, advertising supported services (with personalized advertising), and web services.

 

He went through many services showing mobility of sessions between devices, among other things.  He talked about their platform (Comarch CSP) as underlying several services and then interacting with Telco and video distribution networks to deliver the services.  (Comment – up to this point at least he didn’t mention IMS.  Not sure why not)

 

Another service was video consulting – basically 3 way call with a video session allowing a “consultant” to see what you want to show him on your video phone (His example was medical, interesting).

 

“Content Annotation” – This basically was a way of adding advertising or other annotations when a movie or show is paused, with products relevant to the movie (the example showed a James Bond movie with opportunities to buy a watch, a phone, or a gun which appeared in recent scenes). 

 

Cross media adaptation – here he showed the ability to web browse during video, with the example being finding movie locations using Google Maps.  (Comment – I’ve actually wanted to do this when I thought I recognized a location, of course I do this now by pulling up Google earth or some similar application on my laptop, which usually is within reach when I’m watching TV.)

 

Technologies for doing this:  head-end solutions for TV over IP, CAS/DRM systems for content distribution, PVR (DVR) systems for recording, media servers, and ad servers.  They are working on middleware to link all of this.  Technology on the client side available in the set top includes HTML, AJAX, on-demand streaming reception (with RTP/RTSP), and broadcast streaming with support for DVB-T and DVB-H (the digital video broadcast standards for terrestrial and handheld delivery).

 

He talked a lot about targeting advertising based on individual characteristics.  One interesting notion is that you will have “virtual network operators” for TV – think of it like unbundling cable or satellite so that you can have a custom package of channels, with fees based on what advertising you are willing to view.  One point he made is that unlike the internet, where users can control where they want to go, with TV the provider can completely control what you see: “We control the horizontal, we control the vertical” :-)  (Comment – all this kind of stuff sounds a bit too “1984” to me, though as I think I state elsewhere in these notes, tailoring advertising to your real interests has the potential to make TV and internet much more enjoyable – no need to mute dozens of commercials for medications for conditions you don’t have and frankly don’t want to see or hear about)

 

Lots of other services in this presentation.  TV banking, on-line betting on TV matches, etc.  (Comment – This presentation has a wealth of service ideas, some interesting, some wacky.  I don’t know that it has much to say on implementation, but it’s worth looking at just for ideas.)

 

He is going to launch a new forum www.ngtvforum.org, to look at standards and directions for next generation TV.

 

Question – have you seen initiatives in Sweden on middleware for content?  Their experience is that content providers and end users are desperate to get the middleman out from in between.  In Sweden they have an agreement to allow content (TV) to be delivered directly to users without bundling on a price per channel basis and network operators are barred from interfering.  (Speaker pointed to some Google TV effort in this area).

 

Question (Chet McQuaide) – In his childhood most movie theaters were vertically integrated; theaters belonged to Fox, Paramount, etc.  Would IPTV allow something like this to reappear (i.e. direct distribution from studio to consumer with studio control over everything in between)?

 

Answer – Esteban – one roadblock is that the end-user device can be a closed platform (i.e. owned and controlled by the cable/IPTV provider) and not allow access.  Luzar – IPTV is the equivalent of a closed internet – you (the provider) decide how it will operate.  (Comment – I’m suspicious whenever I hear things like this.  Nature abhors closed systems.  Look at AOL, “walled gardens” in mobile, Net neutrality, etc.)

Ove (Denmark):  Denmark has made a deal with the content providers to provide high quality movies over IPTV.  The content providers are anxious to do it because it boosts quality and has the potential to pre-empt piracy.  Content providers are very careful about protecting their content.

 

Session 9 – IMS on the Move.

This session was chaired by Kevin Wollard of BT.

 

Walter Zielinski (Ericpol Telecom) SIP and Web Services: Competing or Complementary.

He is the CTO of the company.  He talked about refactoring taking place over time determining the course of the industry and technology.  He proposed 4 patterns of market development – gradual (IN, closed proprietary systems), continuous development (migration to IMS, with multiple phases of increasing value), discontinuous development (a big jump, no examples), and hypercompetitive (continuous + discontinuous, which is his view of Web2.0).

 

He described some examples with the use of SIP.  For a fixed operator evolving off IN, he showed a black and white list screening service.  They introduced a Parlay gateway and then an EJB based application server supporting access to SIP and Parlay as the programming vehicle.  An important distinction he made was that the service provider is the one supplying the application, which may not be the telco.  Another example was content push (mobile).  This was built on an IMS core with a Telco application server of some sort. 

 

Another example was televoting, using an INAP to SOAP gateway or SIP to INAP gateway and an enterprise server to deliver the services.  Question – why use INAP at all?  Sometimes an inferior technology can be selected because of a competitive advantage.  (What I think he was saying from the curves in his material is that for an operator today with INAP, it’s an easier and less expensive/risky path to stick a gateway in and reuse it rather than convert the network to SIP, even if it is clear in the long run that SIP will be a better solution.)

 

He showed a bunch of supply and demand curves for different kinds of technologies and drew some conclusions about which were at appropriate price points and which weren’t.  I had a hard time following all of this.  His conclusion seemed to be that SIP was going to be the winner (versus what I am not sure), because of better accumulation of knowledge and improvement over time. 

 

Reputation and history provides a lot of input into which technology succeeds and will be there at equilibrium.  If SIP delivers and establishes a good reputation, it will be a winner.

 

Question (Kevin Wollard) – what performance issues did you have from the SIP based televoting application?  Answer – they supported up to 160 calls/second without performance problems.

 

Comment from the floor (Sun I think) on televoting – didn’t understand how SIP fit in a Parlay or Parlay X solution.  Answer – “We don’t believe in Parlay X”; they used Parlay, and claimed to have 85% of world experience in Parlay (Comment – that’s a big claim for a company I hadn’t heard much about before).  Again restated that he thinks SIP is the winning strategy (which doesn’t answer the question).

 

Niklas Blum (Fokus) Prototyping FMC Services on IMS and Web2.0 for O2 (Germany)

 

Fokus used to act as a research arm for O2 Germany (until Telefonica bought O2).  Telefonica now has both fixed and mobile services in Germany.  The target for convergence is IMS.  At the time this was done there was little focus on IPTV or multi-media; the emphasis was on moving existing services to NGN.  They set up a testbed based on IMS as a back end to O2’s network to facilitate prototyping.  In effect they act like an MVNO with a full IMS architecture.

 

The lab uses mostly stuff from their OpenIMS components, including a converged SIP/HTTP service execution environment.  They have a directory manager from HP and media servers and gateways from Cantata (recently bought by Dialogic).

 

Their IMS client provides a nice screen display and a bunch of different services.  The client is very important in their view – it’s what determines the user experience.  Very little being done on standardizing there but it’s a critical part of the solution.

 

They didn’t want to just re-implement voice mail using IMS.  Their solution combines Web and audio interfaces.  They used traditional DTMF for the voice interface, but provided a comprehensive IP portal for voice mail.  They can select multiple channels for notification (SMS, email, SIP messaging, etc.) 

 

It’s an IMS based solution, and uses the MRF for message storage and announcements.  The app server runs both the SIP and HTTP pieces of the solution.  He gave some signaling flows (nothing surprising; they can access it either from circuit or SIP, but the circuit side gets gatewayed up to SIP and handled in SIP – no IM-SSF-like function to deal with the call in IN).  He talked a bit about how the flow works and a particular problem in handling the 486 response in SIP – using that flow, he described it as “implementing an IN control architecture using IP piece parts”.
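
(Comment – this is not Fokus’ code, just a minimal SIP Servlet sketch of the “deposit on busy” flow being described: proxy the INVITE to the callee and, if a 486 Busy Here comes back, proxy on to a voicemail URI.  The voicemail address is a placeholder, and sequential re-targeting from the response callback is one way to do it, not necessarily theirs.)

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.sip.Proxy;
    import javax.servlet.sip.SipFactory;
    import javax.servlet.sip.SipServlet;
    import javax.servlet.sip.SipServletRequest;
    import javax.servlet.sip.SipServletResponse;
    import javax.servlet.sip.URI;

    public class BusyToVoicemailServlet extends SipServlet {

        private static final String VOICEMAIL = "sip:voicemail@example.com"; // placeholder

        @Override
        protected void doInvite(SipServletRequest req)
                throws ServletException, IOException {
            Proxy proxy = req.getProxy();
            proxy.setSupervised(true);           // we want to see the responses
            proxy.setRecordRoute(true);
            proxy.proxyTo(req.getRequestURI());  // first try the callee
        }

        @Override
        protected void doErrorResponse(SipServletResponse resp)
                throws ServletException, IOException {
            if (resp.getStatus() == SipServletResponse.SC_BUSY_HERE) {
                // Sequential search: re-target the original request to voicemail.
                SipFactory factory =
                        (SipFactory) getServletContext().getAttribute(SIP_FACTORY);
                URI vm = factory.createURI(VOICEMAIL);
                resp.getRequest().getProxy().proxyTo(vm);
            }
            // Other error responses are forwarded to the caller by the container.
        }
    }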

 

The media interaction was very simple using a back to back User Agent to control the call.  His claim was this was something that the IETF got right (Comment – it took a while, but this is a very simple model of call control.  Unfortunately it doesn’t give you much guidance on how to use it.)

 

The application server had both an HTTP Servlet container and a SIP Servlet container (connected to the network using JSIP).

 

His second example was a network address book.  It used “SyncML” to allow address books in devices to be synchronized with the network.  Their server used the XDMS server to store the address book, and they had to create an adaptor to convert the format to the binary formats needed to sync with the devices.

 

They put some services on top of it including the ability to click to conference.  This uses SIP to initiate the conference sessions.  O2 was surprised because their network has no conference facility (Comment -- I suspect what has to happen here is that the conferencing is done in a server in the Fokus IMS lab)

 

He gave another plug for Open Source IMS as a prototyping vehicle.  It is available as a download and can run on anything, including a laptop.  It’s not for commercial use, but the aim is to supply a complete IMS architecture that can be used to test new services and nodes without having to acquire all the pieces.

 

They have just established an “Open SOA Telco” playground, which focuses more on the services layer. 

 

Question (Dave Ludlam) – justify your views on Parlay X.  (Niklas) – they are a lab; they have used it and it worked.  Walter Zielinski – having been there, they found a lot of problems in Parlay.  He feels that Parlay X is basically similar, maybe worse because of performance issues.  In his view Parlay X will have a limited niche in non-real time services.

 

Ulf Olsson (Ericsson) Communication Services – the key to IMS service growth.

 

The IMS community has made some big promises:

  • Multi-vendor, multi-network operator architecture for mass market communication:  Call any time and anywhere and get what you want (standardized services.)
  • An infrastructure where people can differentiate their services to allow operators to attract and retain customers (non-standardized services) This means among other things that you have to be able to get a custom service from your home network while accessing them through another.  (You need to do this to get a big enough customer base, for example, SMS didn’t take off in US until you had interoperability)
  • These are in conflict.

Is IMS the inevitable answer?

  • Right technology (IP)
  • Rich media
  • Can have short time to market.
  • It is a comprehensive architecture with enough standards to work in a multi-vendor multi-operator network.  (There is a reason though why everyone shows a decade old architecture picture, because after release 5 the picture won’t fit on a flat piece of paper.  This scares people off)
  • “There are no other contenders”

 

All of this is irrelevant to the developer:  Service providers used to have two customers: end users and operators.  Now they have a 3rd: developers!  What they want is familiar tools (e.g. the Eclipse workbench) and concepts.  They expect to have packages with published APIs that are accessible through tools and easy to understand.  They want to see what it looks like to the end user – the end user experience is a key part of the service.  They want useful, simple abstractions – not “plumbing”.

 

He proposed that what we should be showing developers about IMS is basically client and service network APIs and a big magic box (IMS) in between.  He made a side comment that the server side will have northbound web services interfaces to expose interfaces and that Parlay X is the only real candidate.  Parlay X 2.0 is too limited, but 3.0 is viable.

 

On the device side, JSR 281 and other standards apply; on the server side, JSR 116/289 (SIP Servlet).  He claims that JSR 289 doesn’t go far enough because it doesn’t really deal with IMS, just SIP.  He showed a complete stack of JSRs in the device which include SIP, utilities, and things like access to calendars, presence, and other things.  He showed a simple call flow which doesn’t require much understanding of the signaling by the programmer, but said that we probably need to create even higher level abstractions.

 

He showed the impact of putting multiple network boundaries between the endpoints in a session.  Basically each one has to be standardized enough to allow consistent implementation, but they must be open enough so that each new service doesn’t have to extend the interface.

 

“M is for Multiservice” – We need to make the M in IMS be multiservice, not just multimedia.  Media doesn’t define the service, a device needs to be able to figure out what service is represented by an incoming invite.  Basically he was arguing for SIP messages carrying some kind of application indicator that could be used to allow it to have one SIP stack yet route messages to the right application.  (Comment – this is exactly the concept that caused me to add a “software address” to the DSL signaling we used in an early ISDN prototype almost 30 years ago.  The result eventually translated into multiple layer-2 sessions over the D channel in ISDN)
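
(Comment – to illustrate the kind of application indicator he was arguing for, here is a sketch of a SIP Servlet that dispatches an incoming INVITE on a header value.  The “P-Application-Id” header name and the handler methods are hypothetical – the talk argued for some indicator without naming one.)

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.sip.SipServlet;
    import javax.servlet.sip.SipServletRequest;
    import javax.servlet.sip.SipServletResponse;

    public class ApplicationDispatcherServlet extends SipServlet {

        @Override
        protected void doInvite(SipServletRequest req)
                throws ServletException, IOException {
            String appId = req.getHeader("P-Application-Id");   // hypothetical header

            if ("push-to-talk".equals(appId)) {
                handlePushToTalk(req);
            } else if ("gaming".equals(appId)) {
                handleGameSession(req);
            } else {
                // No indicator: reject here; a real device might instead fall back
                // to treating the INVITE as a plain voice call.
                req.createResponse(SipServletResponse.SC_NOT_IMPLEMENTED,
                                   "Unknown application").send();
            }
        }

        private void handlePushToTalk(SipServletRequest req) {
            log("push-to-talk INVITE from " + req.getFrom());    // real logic omitted
        }

        private void handleGameSession(SipServletRequest req) {
            log("gaming INVITE from " + req.getFrom());          // real logic omitted
        }
    }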

 

He talked about exposing interfaces to enterprise – doesn’t want to expose the ISC interface but feels that everything interesting can be done with the client interface and it’s simpler.

 

How to be developer friendly – you need to have a testbed that can simulate everything on a desktop.  “SDS” is their simulation environment (60 days for free, then license).  It has the whole IMS, but most important it also has a simulated Ericsson Symbian phone.  (Comment – As someone who has hacked a lot of code in my past, I believe this is very important.  It’s very difficult to develop without a good simulator that can show you your code running in the environment that the user will see.  There are always surprises in the specs.)

 

A good service becomes a platform for new services!  We typically draw the world as IMS core + everything else.  Instead we should view what’s happening as constantly building up higher level services that become the abstractions at the next level.

 

Question (Remark actually)  I assume Skype is used by 90% of the users, and it’s not IMS.  Comment? Answer – Skype was very successful on the fixed side early on.  They have done calculations on Skype over 3G and it’s very expensive because of charges for packet data compared to normal voice.  “Three” has implemented a Skype lookalike for mobile that looks like Skype to the user and works with Skype but uses normal voice transport.  Actually it’s a sneaky way to avoid inter-carrier delivery charges since the call goes to Skype and Skype takes care of that.  He said eBay hasn’t been all that hot on Skype.  Feels Skype will not be a factor in mobile, but some factor in fixed.

 

Question (Chet McQuaide) – if users are free to buy their own terminals, as they are in much of the GSM world, how do you get consistency?  (They envision deploying their middleware on any device the user wants during the pre-standards period.  JSR 281 isn’t there yet, but when it is, he expects it to solve the interworking problem.)

 

Closing Session

The conference typically concludes with the chairman of the program committee giving a wrap up session which has collected inputs on what was learned from all the sessions.  This year they also let the chair of the keynote session summarize it separately.

Stuart Sharrock (Advisory board chair) on the keynotes

The 3 keynotes presented contrasting viewpoints and came across rather gloomy to many, but you don’t have to look at it that way.  One good sign is that we have attracted and interacted with the IETF community, and that the view from those folks isn’t as “hard” on telcos as in the past.  He suggested that the view on network neutrality wasn’t as hard.  (Comment – I didn’t get that impression really.  The internet folks really don’t want the access providers interfering with open IP transport.)  The future really is in the ability to interoperate with the IP world and telecom world – it isn’t that telecom has no future, it’s that the future is quite different from what we have now; life would be boring if it weren’t.  (Comment – different is good, but not if it’s vastly reduced)

 

Ulrich Reber (TPC Chair)

The conference at a glance

  • 150 attendees
  • 1 keynote session
  • 18 technical sessions
  • 55 presentations, 9 posters
  • 1 panel

 

He summarized the key messages of the conference in several categories.  On business models: 

  • Don’t wait for the Internet.  Be proactive and collaborate based on assets to get out of the “valley of depression”.
  • Telcos have a new wholesale opportunity based on providing trusted environments to users.
  • There will be new business models and we have to prioritize what we look at to address them.
  • The answer to technological and business uncertainty is R&D to build re-usable layers to get better and faster response.

 

On new services and applications:

  • Telcos don’t take sufficient notice of end-user requirements – we need to be better at beta trials to get feedback.  (Comment – yes, and you need to plan to allow some to fail)
  • Subscribers become content providers.  Don’t neglect this
  • Community services drive huge volumes of use
  • Quadruple play with personalized content may become the next killer app.  Telcos have very valuable subscriber use information that is of use to content providers and broadcasters.

 

Service enablers and Infrastructure

  • Converged billing and charging, identity management and AAA are key pieces
  • Need open service enablers and the ability to support composition (a new service at one layer becomes enabler at the next layer)
  • Still need work on migration of “IN-values” to the next generation service enablers.
  • APIs which make capabilities accessible to Web application developers are key
  • SOA technology is already being used for “network-near” solutions.
  • HSS with centralized data is key for efficient profile management
  • Customizable testing leads to more rapid NGN/IMS service development and deployment.

 

The next conference is planned for fall of 2008 in Bordeaux (Comment – there is some uncertainty over this because this is the first time ICIN has scheduled the next conference as close as 12 months after the previous one.  Also there is some concern over how to get attendance up)

Poster Session

This was the second year for a Poster Session at ICIN.  There were 9 presentations, out of 10 accepted, and I believe the material was of very high quality.  I didn’t look at all of them in detail, so here are some impressions of the ones I found most interesting.

Stephen Hall (Nokia/Siemens) Instant Messaging – Evolution to a revolution

This one looked at how IM has evolved from a niche service to a major service category, and how it has become a component of many other services.  It showed use of presence to trigger other services and migration of sessions between IM and other media.

Mauricio Cortes (Alcatel Lucent) Improving Performance of IMS architecture with Intelligent Network Processing

This was an interesting idea of using timestamping, priorities, and delivery deadlines to improve the performance of IMS messaging.  The problem it addresses is that the IMS architecture requires many internal messages to establish a session (someone said about 45), and under heavy load it has the potential to break down when messages get lost or arrive too late.  The scheme here basically identified the messages most likely to still be useful and gave them priority over those which were likely too late and those which were unlikely to result in completed sessions.  A 20% improvement in “on time” message delivery was claimed. (Comment – this is an example of a classic signaling problem in old telecom networks re-appearing in IP networking and the solution is not unlike what SS7 networks have to do under congestion)
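
(Comment – the scheduling idea as I took it away can be sketched very simply: queue internal messages with a delivery deadline, serve the most urgent first, and drop anything already too late to be useful.  The fields and names below are my own illustration, not the poster’s implementation.)

    import java.util.Comparator;
    import java.util.PriorityQueue;

    public class DeadlineScheduler {

        static class Message {
            final String body;
            final long deadlineMillis;   // absolute time by which it is still useful
            Message(String body, long deadlineMillis) {
                this.body = body;
                this.deadlineMillis = deadlineMillis;
            }
        }

        // Earliest deadline first.
        private final PriorityQueue<Message> queue =
                new PriorityQueue<>(Comparator.comparingLong((Message m) -> m.deadlineMillis));

        public void enqueue(Message m) {
            queue.offer(m);
        }

        // Return the next message still worth sending, silently dropping expired ones.
        public Message nextToSend() {
            long now = System.currentTimeMillis();
            Message m;
            while ((m = queue.poll()) != null) {
                if (m.deadlineMillis >= now) {
                    return m;            // still on time -- forward it
                }
                // else: too late to contribute to a completed session, drop it
            }
            return null;                 // nothing useful to send right now
        }
    }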

Bastien Lamer (Orange) The Home Networking in ETSI/TISPAN

This poster presented work in TISPAN towards standardizing an architecture for a home controller that would be able to control devices in your home connecting to a broadband network via IMS.  The intent is to cover all access (Fiber, cable, DSL, Wireless).  The architecture proposed is basically a “mini IMS”, with the ability to have local control of sessions entirely inside the home, and to work with the communication network IMS to control sessions going in/out.  I asked a bunch of questions about things like whether the home controller has storage for announcements or user media, and he said work on those kinds of issues is still quite early.  (Comment – I have had discussions with people in the cable business in the US about this kind of technology.  Cable operators are all looking to standardize what’s in an intelligent set top box/telephony controller that can support converged services.  Cable Labs has done some work in this area but the providers seem to be ahead of the standards in exploring it.)

Do Van Thanh (Telenor)  Linux for advanced future mobile phones

This paper discussed whether Linux was a suitable base for a mobile phone.  The message is that the “real time” performance of Linux has reached the point where this is feasible, and the memory and processing requirements of Linux are feasible for a mobile device.  (Comment – if you can put Windows on one in any form, Linux certainly has to be feasible)

Network Call Log (Orange).

This wasn’t a poster but an actual demonstration by Orange Labs of keeping a call log in the network that would capture calls to all devices and integrate with address books and other user data.  It provided the ability to review the history and place calls.  They had a live demo that included the ability to capture calls on a mobile phone.  This was an interesting demonstration and showed the value of doing this in the network versus on devices.  (Comment – it’s interesting that they felt it necessary to justify this.  Smart phones have created the expectation that everything is in the endpoint, though it is easy to see that a network log has value, especially if you have multiple devices, or your devices aren’t always connected to the network.  If you have some kind of coverage arrangement where your calls appear on the device of an assistant, or some kind of simultaneous ring service, the value is very clear.)  The demo supplemented a presentation that they gave in one of the sessions.