I attended ICIN 2011 in Berlin.
The following observations are my own take on the major themes for discussion in the conference:
Stuart outlined the rationale for ICIN – neither a marketing conference nor a narrowly specialized one. He stressed the mission of ICIN to get involved with the local community working in the area. He outlined the nature of this evening's session, in which TSB, a local organization of innovative companies in Berlin, would present.
Heinrich started by reminding people of a demonstration in Alexanderplatz in the 1860s that resulted in fundamental change. He made an analogy that the role of ICIN is in part to facilitate change. He reminded all that DT will sponsor a visit to their facilities on Friday where material from their home networking project will be presented. (Comment – unfortunately I will not be able to make that due to travel constraints.) He talked a bit about the difficulty of interfacing to startups because they do things differently – the business logic is different. He offered strong support for the TSB session and companies, saying that their presence is vital to the intellectual content of the conference whether or not they are able to provide financial support in the same way as the larger companies.
ICIN is an intellectual Oktoberfest (getting together to celebrate something of mutual interest).
He thanked ICIN for inviting their participation for the second year. Berlin-Brandenburg has defined five clusters for innovation; "Information and Communication Technology" is one of them and is also a key part of the others. He cited statistics on the growth of the region's ICT sector over the past five years.
Roberto started with the conference theme "From Bits to Pipes to Clouds". What this means to him is that while the telecom industry has traditionally focused on moving lots and lots of information, operators want to move on to adding more value. Should network operators just sell pipes (more than bits, but still just connectivity), or should they move into the clouds (value-added services from a network)? The key issue is how to do this in a way that distinguishes them from their competitors. How should they leverage their assets to provide better services?
Roberto talked about how the program related to this theme, from tutorials on the basics (identity management and service infrastructure), to the technical program, to workshops on topics going forward. With that Roberto declared the meeting to be open.
Roberto Minerva outlined the session and introduced the speakers.
Musa is a manager of standards for services at ALU who has worked in the area for 15 years, first for Bell Labs, then ALU. (Musa is someone I worked with occasionally when I worked for Lucent in the late 1990s, but not closely.) He is the technical Plenary Chairman of OMA (Open Mobile Alliance) and speaks at ICIN in this capacity. (Note that OMA also has a booth in the poster/demo area for the whole conference.)
He talked about the problem of helping carriers leverage their assets in services. He talked about technology and standards, but from the perspective of what's important to meet the needs of the market. In introducing the market he briefly reviewed the market for APIs in the past (Parlay, JAIN, etc.) – nobody took the market by storm. Does it matter any more? His answer is yes.
He gave some numbers on the application market: $25B in 2015 from MarketsAndMarkets, $35B in 2014 by IDC, $58B in 2014 by Gartner, and $37B in 2015 by Canalys. These numbers are comparable to gaming, coffee, and weight loss (all roughly $60 billion markets). He noted, though, that nobody buys pessimistic market numbers.
He noted that the 2010 market for applications was $2B–$4B. Very large, but reaching the $58B number requires doubling every year. (Comment – these are interesting numbers, but that's the whole application market, not the APIs!)
OMA's question on this is: how do you double every year? Is it sustainable? Their answer is no, because of fragmentation, and standardization is an answer. He gave an example from Programmableweb.com (a web site which posts APIs), and noted that whatever keyword you put in (like location), you get lots of APIs (>100). (Comment – yes, but I've looked at this site and there's a lot of stuff in there that is only vaguely related to whatever concept you put in. It's not a very good search engine for identifying the most important or most used APIs.) One could ask why you need standards if we already have so many APIs, but the proliferation itself answers the question. (Yes, that's true provided the fragmentation is real, and provided there's a willingness to implement a standard. I don't believe the latter has actually been demonstrated.) To demonstrate scaling problems he drew the traditional NxM kind of connectivity that results when each operator has a unique API. Standards can simplify this problem. (Comment – of course, but actually making this work has yet to be demonstrated.)
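(To make his NxM point concrete, here is a toy calculation of my own – the developer and operator counts are invented:)

```python
# Back-of-the-envelope sketch (mine, not from the talk): integrations needed
# when every operator exposes a unique API, versus when everyone implements
# one standard API once.
developers, operators = 1000, 50

bespoke = developers * operators        # each developer integrates each operator's API
standardized = developers + operators   # each party implements the one standard once

print(f"bespoke integrations:  {bespoke:,}")        # 50,000
print(f"with a standard API:   {standardized:,}")   # 1,050
```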
What OMA does: recognize the assets that exist in the network and provide APIs that will open them up in a secure and manageable way. The goal is that the application community can get to an asset independent of which operator is exposing it. (Comment – yes, but we have heard this before. For it to work, the API has to be easy enough to implement, and the operators and others involved have to have sufficient incentive to do it. I think the model also doesn't really address the problem of substitution – can the developer build the service without dealing with the fragmented world of network APIs at all? Even if network APIs are standardized, will the cost and/or complexity of using the APIs be more than the cost and complexity of a workaround?)
He gave a list of APIs in three formats – abstract, REST, and SOAP-based – which are basically different ways of presenting the same interfaces. They are working on a growing list of categories.
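(To give a flavor of the REST style, here is a hypothetical call loosely modeled on the OneAPI/OMA RESTful location interfaces. The base URL, paths, parameters, and response shape are my own placeholders, not the actual specification:)

```python
# Hypothetical REST call in the style of the OMA/OneAPI location APIs.
# Everything here is illustrative; consult the real OMA RESTful Network API
# specs for the actual contract.
import urllib.request, urllib.parse, json

BASE = "https://api.example-operator.com/oneapi/1"   # placeholder operator gateway
params = urllib.parse.urlencode({
    "address": "tel:+15551234567",    # subscriber to locate
    "requestedAccuracy": 1000,        # metres
})
with urllib.request.urlopen(f"{BASE}/location/queries/location?{params}") as resp:
    data = json.load(resp)
print(data)  # e.g. latitude/longitude plus an accuracy figure
```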
OMA works in a complicated landscape with a lot of different agencies and other players with stakes in this industry. OMA works by getting requirements from industry partners, including standards organizations (WAC, GSM OneAPI, GSMA requirements from wireless operators, etc.). The approach is intended to be complementary, allowing the other agencies to focus on the needs of their constituencies (e.g. payments and settlements for GSMA), while OMA focuses on defining interfaces that meet everyone's requirements. (Comment – all very good in theory, but in practice what often happens is that the resulting interfaces become quite complex in order to handle the needs of a very broad set of players.)
Question: Paul Crane (BT) – he used to be on the Parlay board – what's the basis of those analyst reports, and how much goes to the providers of APIs? (Musa didn't have a breakdown.) (Comment
– I did such a projection on the application market for Insight that did break
it down. The view I had was not nearly
that large, but it’s not negligible. The
real thing providers of APIs need to figure out is where they have assets with
unique value and can charge for them, and where the API doesn’t have enough
value to collect much revenue from the application builder, but by providing
it, maybe even for free, they break the ice with the application builder and
make it easier for them to adopt the APIs that they have to pay for. A key aspect of this is that much of what we
used to think was valuable (e.g. call control) has much less value in an IP
world, so our intuition from the past is flawed.)
He talked about the growth in satellite direct-to-home broadcasting over the past decade. (He showed a map of the world indicating the different standards in use: the DVB standard is used in Europe and Africa, while North America has a different system, as does Japan.)
Why is this important? Tablet PCs and smartphones – 66% of traffic on networks will be video by 2015. DVB is very efficient at 8.2 bits/s/Hz. (Comment – WOW)
The growth of video is driving network operators to install more cells (so that each one covers a smaller area and the bandwidth can be re-used more often) and make requests for more spectrum. Even if they can do this it means more energy use and expense.
His answer is we need to cooperate between broadcast and mobile industries.
Trends:
Consequence for broadcasting: More variety and less value to immediate live media. The unique value of Terrestrial networks is delivering video to portable and mobile devices.
Consequences for Cellular: Swamped with video, the cost of the network rises. Will people be willing to pay? Demand for spectrum will grow. Carbon footprint will grow. "Flat rate" pricing will disappear (it already has in some markets).
What's the concept for a hybrid broadcast/broadband? Use broadcast to deliver the most popular content, deliver the long tail of content over broadband, and save the cost of broadcast. Being able to store and buffer means you can use cheaper broadband to deliver content for later viewing. Multiplexers that put all the available channels into one stream in real time will go away and instead become dynamic (only the most popular streams will be sent).
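(A toy model of my own of the dynamic multiplex idea – stream names, viewer counts, and slot capacity are all invented:)

```python
# Put only the most popular streams on the broadcast multiplex; serve the
# long tail over broadband unicast. Illustrative numbers only.
viewers = {"news": 1_200_000, "sports": 900_000, "soap": 350_000,
           "niche-doc": 4_000, "local-council": 300}
MUX_SLOTS = 3  # capacity of the broadcast multiplex, in streams

ranked = sorted(viewers, key=viewers.get, reverse=True)
broadcast = set(ranked[:MUX_SLOTS])   # one transmission serves every viewer
broadband = set(ranked[MUX_SLOTS:])   # unicast, one stream per viewer

print("broadcast:", broadcast)
print("broadband:", broadband)
```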
The TV interacts through the network with logic that decides
how the content desired will be delivered and configures the appropriate
equipment to do so most cost effectively.
He talked about integrating use of "white spaces" in the spectrum through this – the owner manages who can use the spectrum and when, and allocates it. (Comment – interesting, this is a different view from the one taken by those looking at white spaces in the US.)
Conclusion – standards for digital broadcast have reached the maximum available efficiency, we need something else to continue to support an ever rising demand for content. Dynamic broadcast is a way of exploiting the same spectrum in new ways to continue to improve efficiency.
He outlined SAP’s research effort and what they do. Then he talked about trends –
The application market is shifting – more applications, each with fewer customers. This means more customization, but requires more efficiency in building them to be cost effective.
Why don’t more businesses move to the cloud? One answer is reluctance based on solving some key business problems – getting the data into the cloud, mobility, protection, etc.
His answer is the business web – a cloud based business environment that gets access to key resources and optimizes getting businesses together. Four components:
They are working on the technology to support this. They see lots of potential users including businesses, e-government, and consumers (more standard access to business services with perhaps less redundancy required). They do applied research and he gave as an example some pilots where they have worked with businesses on specific problems, mostly dealing with retail and logistics. They have two broader circles of work planned further out in such areas as smart phone only based business, public safety, connected cars, mobile workforce, etc.
They are working on some research areas:
Question: Chet McQuaide – he was glad to see user interfaces and interoperability as key challenges. He looked back at the big proliferation of software for spreadsheets, graphics, scheduling, etc., and noted that Microsoft succeeded not because they were better but because their products interoperated, and asked how this was being addressed. (Comment – in fact, Microsoft's products were not interoperable at first. Lotus had much better integration of their suite. Even after Microsoft had a common look and feel, there were noticeable disconnects in how different applications worked for a long time (e.g. the inability to display multiple Excel windows as you could for other MS apps).) Answer – it's important, and we are working on it, but we don't expect to force everyone into one look and feel.
Question (?) -- We have seen this before under the banner of “Network Centric Services”. Does this fit into efforts to standardize network services? Answer – absolutely. They aren’t the only ones with the idea.
Joe is an executive with HP who has been much involved in strategy for HP in IT. (During the break we spoke briefly and he described his breakneck travel schedule to address audiences on 3 continents in less than a week. It is unfortunate he could not attend more of the conference.)
He created an approach to understanding the economics of Cloud Computing, and has written a book on the topic that will be coming out in early 2012. His website (JoeWeinman.com) has some key chapters available for download.
Before working for HP he worked for AT&T for about 30 years. He outlined several areas for the cloud:
He asked who the largest Telco in the world was, and volunteered Microsoft as the answer based on their acquisition of Skype. He said Facebook and Google would soon be as big or bigger. This presents a major challenge to the traditional operators in competing. (Comment – yes, but they aren’t comparable. The IP/IT companies do not provide the underlying access network. What we are seeing is a restructuring that if not checked by regulation will separate services from access and networking)
He cited regulation as a key part of the environment saying it can either facilitate changes like dynamic broadcasting or hinder it.
He went through a number of ways to use cloud based solutions to address the key challenges for service providers – basically anything done in software is a candidate for cloud based application. “It’s nice to move application silos to the cloud, but you don’t want to move to cloud silos” We need standards for architecture, services and components in the cloud to prevent this from happening.
He cited a common definition: Cloud = Service accessed by browser over the internet (Yankee, Gartner, Wikipedia). This is misleading and wrong. Services are okay, provided the term includes content, data, etc. Access by browser is less than 5% of the real use – the definition must include access from mobile, applications, and voice and video endpoints. “Over the Internet” is too limiting. This must include mobile, “Extranet”, Intranet, etc.
All other things being equal:
(All this comes from his book)
The key phrase here is “all things being equal”. What’s a hybrid? Anything from pure cloud (e.g. Salesforce.com) to everything in house. He talked about one strategy – “Mixed rate hosting cloud” – all the capacity is in the service provider, but some is dedicated to the particular enterprise, and some is dynamically shared. Pricing of the services depends on the degree of dedication (Comment – yes, this sounds a bit like using a mix of private network and virtual private network in meeting your communication bandwidth needs)
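(To make the hybrid economics concrete, here is a toy calculation of my own, loosely following the peak-versus-average reasoning in his material; all costs and demand figures are invented:)

```python
# Own enough capacity for the baseline, rent cloud capacity for the peaks.
# Invented numbers, for illustration only.
demand = [40, 45, 50, 55, 120, 60, 50, 45]   # load per period (arbitrary units)
OWN_COST = 1.0    # cost per unit-period of owned, dedicated capacity
CLOUD_COST = 1.8  # pay-per-use premium: cloud costs 1.8x per unit-period

peak, avg = max(demand), sum(demand) / len(demand)

all_owned = OWN_COST * peak * len(demand)    # must provision for the peak
all_cloud = CLOUD_COST * sum(demand)         # pay only for what is used
baseline = min(demand)
hybrid = (OWN_COST * baseline * len(demand)
          + CLOUD_COST * sum(d - baseline for d in demand))

print(f"peak/average ratio: {peak/avg:.2f}")
print(f"all owned: {all_owned:.0f}, all cloud: {all_cloud:.0f}, hybrid: {hybrid:.0f}")
# With these numbers the hybrid wins: 960 vs 837 vs 581.
```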
Another hybrid strategy (“Cloudburst”) has dedicated resources in the enterprise data center but taps resources in the cloud when capacity is needed. This requires that the cloud and the enterprise run the same applications and have portability.
Front End/Back End is another hybrid approach. He thought this one had some interesting characteristics for optimality.
The Cloudburst architecture requires thinking about where the data is – if it all comes from the users (so the application is in effect stateless), it’s easy – just crank up the cloud and move the user there when capacity is needed. If the application relies on lots of data, you have to decide where the data goes and how it’s accessed from both the cloud and the local applications. “The Internet probably isn’t up to the task”. Another approach is actually to migrate the data to where the application that serves the user is – This is a problem of speed and capacity. He cited a rule given by Amazon “If it’s a small amount of data, send it over the internet, otherwise use Fedex (or DHL!)” (Hmm, we used to call this sneakernet). He talked about how to do this within standards for optical networks – dedicate enormous bandwidth dynamically to the job of rapidly moving the data. (Comment – you may be able to allocate the bandwidth, but isn’t the real problem getting all that data out of the database on one side and into another one?)
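(The arithmetic behind the FedEx rule is easy to sketch; the dataset sizes and link speed here are my own illustration:)

```python
# Shipping disks beats the wire once the dataset is large enough.
TB = 8e12  # bits per terabyte (8 * 10^12)

def transfer_days(terabytes, gbps):
    seconds = terabytes * TB / (gbps * 1e9)
    return seconds / 86_400

for tb in (1, 50, 500):
    print(f"{tb:>4} TB over 1 Gbps: {transfer_days(tb, 1):6.1f} days "
          f"(overnight courier: ~1 day regardless of size)")
# 1 TB ~ 0.1 days, 50 TB ~ 4.6 days, 500 TB ~ 46 days.
```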
Yet another approach is simply to replicate the data over the network full time to the cloud. You do this just for robustness and backup, and “cloudbursting” comes along for free. His observation was there are good standards for this, but it doesn’t work with transaction intensive applications because the time delay for synchronous data mirroring is just too long (Comment – wow, I worked on this stuff in another lifetime 35 years ago. The technology has changed the units in talking about problems of distributed databases, but the fundamental problems of consistency, performance, and robustness haven’t gone away)
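(The speed-of-light arithmetic behind the mirroring problem, a quick sketch of my own:)

```python
# Why synchronous mirroring caps transaction rates: every commit waits for a
# round trip to the remote copy. Light travels roughly 200 km/ms in fibre.
FIBRE_KM_PER_MS = 200

def sync_commit_floor_ms(distance_km):
    return 2 * distance_km / FIBRE_KM_PER_MS  # round trip, ignoring all other delay

for km in (10, 400, 4000):
    ms = sync_commit_floor_ms(km)
    print(f"{km:>5} km replica: >= {ms:5.1f} ms per commit "
          f"(<= {1000/ms:6.0f} serial commits/s)")
```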
He began by showing the explosive growth of broadband networking in Japan.
He gave goals for their standards work aimed at increasing energy efficiency and in general reducing the impact of networking while maximizing the value extracted from networks.
From this he came up with 5 future targets:
He talked about the Akari architecture design project, which focused on a “new start” network design to address these needs over the next 2-3 decades. The target is to have something that can be introduced by 2015. The approach maps the 5 targets to a set of requirements in a variety of areas (very complex mapping), and the result is they are designing to meet those requirements now.
He talked about a couple of specifics including:
· A solution for mobility that used a hierarchical location service to make mobility more scalable.
· A high availability solution involving replication of routers.
· Interworking of packet and circuit switching.
· Etc.
(The talk presented a lot of technical detail on their architecture and how it works. I have the slides which I will review off line, but I think this was too much to absorb for most people late in the day.)
Questions to the Panel:
From Roberto Minerva: What kinds of services will be deployed and how will they be accessed? (Musa) – the quest for the killer service is over, they don’t exist. OMA really has to support easy exposure of any service and use by anyone (Comment – yes, but that’s how we wound up with >100 APIs for each “service”!)
Another answer (Reimers) – Media and social communication is where the growth is.
(Joe Weinman) – Another way to put this is to ask what the telco should be exposing – what's valuable? He feels telcos have been barking up the wrong tree with presence and location – open GPS in handsets already provides this. What should telcos be doing: dynamic bandwidth and QoS with variable usage-sensitive pricing. This has key value for people. E-readers could make use of mobile resources if we offered the capability to use the network dynamically. There are similar opportunities for processing, storage, and other resources.
He said what creates value is "decommoditization" – Coach takes 1 Euro of cow skin and turns it into a 2,000 Euro handbag. It's easy with the right things exposed. (Comment – I'm not sure I understand this. The success of Coach is more about the power of brands than access to the raw materials.)
Salesforce.com didn’t re-invent customer relationship management – these companies enabled business transformation.
Question (Phillip Kelly) – (I didn't capture this question in my notes.)
I gave an overview of the poster/demo session using material I got from the 7 authors. The session occurs over Wednesday/Thursday, with an evening reception on Wednesday. This year every presentation had some kind of multi-media presentation, and most had some kind of demonstration. Much of this was focused on services. The specifics of the 7 demos are covered in a later section of this report devoted to the session.
This was a very interesting session with a lot of brief
talks. The material ranged from
interesting telecom services to standards to companies that provided services,
like software quality certifications, of interest to the telecom world. (Comment – one interesting thing was the degree of
informality of the presentations. Most
presenters identified themselves and their companies only at the end of the
talk, and nobody gave much marketing.
Most speakers took much less than allocated time. I think this is a reflection of the
difference in culture between the traditional telecom companies and the new
world of Internet and IT technologies. Many of these talks were like the
“elevator talks” commonly given by startups in the .com era looking for
partners and funding.)
The first presenter, from a company called 42Com Telecommunications, presented a service called "Fonic Call Home". Fonic is a provider of mobile phone service (prepaid?). What this service does is enable a user travelling outside of Germany to call home.
The company is less than 1km from the Park Inn Berlin, and
one of their major customers is the TV tower which is very near the hotel. The company took over special mobile services
(e.g. paging) from France Telecom, Vodafone, and DT who didn’t want to operate
these essentially niche services. They have country-wide networks for these services.
They demonstrated a small endpoint costing less than 3 Euros aimed at applications for energy control and monitoring as well as alarms. (I think he said they have 2 million units deployed.)
The concept is to separate control of a router from the execution of packet processing via an open interface, which makes it easier to build applications.
OpenFlow is a Silicon Valley driven standard, and there are early implementations. It's not carrier grade and needs extensions for optics, wireless, IPv6, resilience, redundancy, etc.
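(For readers unfamiliar with OpenFlow, here is a toy sketch of my own of the control/data-plane split; it is not the actual protocol, just the idea:)

```python
# A logically separate controller installs match->action rules; the switch's
# data plane just looks them up per packet and punts misses to the controller.
flow_table = []  # lives in the switch; populated only by the controller

def controller_install(match, action, priority=0):
    flow_table.append((priority, match, action))
    flow_table.sort(key=lambda e: -e[0])   # highest priority first

def switch_forward(packet):
    for _, match, action in flow_table:
        if all(packet.get(k) == v for k, v in match.items()):
            return action
    return "send_to_controller"            # table miss: ask the control plane

controller_install({"dst_ip": "10.0.0.5"}, "output:port2", priority=10)
print(switch_forward({"dst_ip": "10.0.0.5"}))  # output:port2
print(switch_forward({"dst_ip": "10.0.0.9"}))  # send_to_controller
```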
OFELIA – an EU project to create a pan-European testbed of OpenFlow solutions (OpenFlow-enabled routers interconnected). There will be a process to make proposals for funding to do interesting things on it.
SPARC – This is a project aimed at adoption of OpenFlow in carrier networks. They will be demonstrating the use of OpenFlow in a network.
He showed a 2-minute video of the application, which allows content to be exchanged between smart phones by way of gestures, without Bluetooth. (Basically one user can "throw" the content off a phone by flicking it like a Frisbee, and another catches the content with his/her phone. You can also move content by dragging from one device to another.) They have an application in the Apple store that actually does this. Transfer requires both users to gesture, one to send and one to receive. This avoids security problems with short-range networking like Bluetooth. Everything happens through the mobile internet and up into the cloud. They support both iPhone and Android, but the Android version is more flexible.
Another product – the wall. http://wall.hoccer.com This enables a user to catch content with a laptop. Using this and the mobile app, you can move content either way between mobile phones and laptops. They have 800K downloads to date of their mobile applications, with no marketing.
You can transfer money this way (via paypal or German savings and loans).
(Comment – this is
very simple and probably intuitive to smart phone users, but would seem to
avoid a lot of questions like security, billing, QoS,
and other things carriers would be worrying about.)
iSQI is a leading provider of certification exams for software quality worldwide (60 countries and 6 languages). They certify a variety of standards, and three quarters of the DAX30 companies are among their customers.
They are both helping companies create strategies for cloud computing and helping them implement those strategies. They are about 50 people.
What they talked about here is one of the projects they are doing.
The company is focused on replacing the DVD as a distribution medium for games. This makes casual use much easier – point your browser somewhere, click, and play, and get an experience as good as you would have with custom hardware and a physical DVD. Their key design decisions were to make the game run on the client device (browser/computer) and to stream game content to the device as needed. The system uses a browser integration framework that lets the game work with the browser, a streaming framework to get the content to the endpoint, and a compression framework which expands the streamed content into what's needed.
The Game is typically written in C++, so a key problem is how to get it into a browser. They can use a Java Applet, browser plugin or a new Google Chrome extension that allows native code. (Comment – I’ll bet this sets off all kinds of security warnings!)
The streaming framework handles getting the content data needed to support the game, with the objective of staying ahead of what the game needs. The game is unaware of whether the content is local or being downloaded.
The Compression framework saves bandwidth and is done with a partner who works on algorithms. There are different versions – lossless and lossy. They use standards (e.g. MP3) where available, but nothing yet standard for 3D.
(Comment – this all seems straightforward enough, but I do wonder about the implications for the user. Serious game players opt for custom hardware to make the game run faster or better, and buying a DVD and installing it is a pretty low barrier compared to that. I would also wonder how much bandwidth this is likely to use and how many people without flat-rate data plans will be rudely surprised by how much they spent streaming content for a game.)
What this company does is test usability of websites. He showed a picture of the founder of Zynga, a huge player in internet gaming, who said that his biggest early mistake was not testing enough.
The ideal “learning loop” for improving web based services uses quick qualitative testing to get feedback on how well it works and then implement improvements, then does quantitative testing of the service to refine it.
5 users are sufficient – 5 users can find 85% of the problems in a service. What their company does is provide the user panel: they help the customer with a website to test develop a script for how the users will use the site, then they provide the test users, and within 24 hours they will give you feedback. They will give you comments, video of how the system was used, logs of where the mouse went, etc. Each test user costs 49 Euros, and every 7th user is free! They offer a money-back guarantee (if you don't feel the service gives you at least that much value, it's free).
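(The 85% figure matches the well-known Nielsen/Landauer usability model; the arithmetic is simple:)

```python
# If a single test user hits a given problem with probability L (about 0.31
# in the Nielsen/Landauer studies), n users find 1 - (1 - L)^n of the problems.
L = 0.31
for n in (1, 3, 5, 10, 15):
    print(f"{n:>2} users: {1 - (1 - L)**n:.0%} of problems found")
# 5 users come out at ~84%, which is where the "5 users find 85%" rule comes from.
```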
Most of their staff has founded companies. (i.e. their test users are in fact very sophisticated technology customers) He built the testing business because he needed the test capability for something he was building and felt it would be useful for others and supportable as an independent business.
The company makes miniature computers (on the scale of a matchbox). The objective is really to support "smart sensors" for smart grid, smart home, and other applications that require radio-connected computing. Another application is to follow goods through the supply chain.
Their very small modules (27x19 mm) have up to 72 MHz computing and 8 Mbit of flash, and are easy to connect and integrate with a variety of interfaces. The devices are low power and long lasting; they implement a virtual machine environment and support Java 1.6. Software development is via conventional development tools (no experts needed). (Comment – what he really said was that because they don't need C++ you don't need experts; I'm not sure I buy that entirely, as one who has built in both C++ and Java.)
Some applications –
The company was founded in 2010 as a spin off of TU Berlin.
Following the session there was additional time. The session chair described his own business idea, which is to use magnetic sensors in mobile devices to sense the position of magnetic objects in the environment, including on users' hands. This would allow gesture-based interfaces to be supported without any cumbersome sensors, and offers the potential for interfaces in a lot of places where computer use is now difficult, such as underwater, at the beach, or in "dirty" environments. He felt this had revolutionary potential.
I missed the beginning of this session because I had to resolve some problems in the setup for the poster session. I came while she was describing distributed algorithms to control a conference.
The basic structure avoids the use of any centralized components that would appear in a standard IMS architecture. It does make use of a WiMAX base station for such operations as sending out heartbeat messages that the conference members use for coordination. Procedures were described for handover of the base station role between base stations.
Question: Is the control role done by the base station here normal? (i.e. will base stations be able to do this?) Answer – yes, the 4G architecture puts more intelligence into the base station (controller plus relay role), so it is reasonable to assume that it can play this role. (Comment – yes, but I think the real question is whether one can expect base stations to implement this as standard)
Question: For handover of clients between base stations does the client need two radios to be in communication with both networks at the same time? Answer – two kinds of handover – link level hand over of the radio channel happens as usual, but the control layer is an overlay. For this to work, the base stations have to communicate to forward messages from the client during handover. (Not positive I got this completely right because I missed this part of the talk)
Question (Rebecca Copeland) – most conferences include both fixed and mobile. How is this handled? Answer – yes, they handle both. (The real question seemed to be about whether the role played by the base station would cover the fixed endpoints as well.) Another aspect of this was how to keep track of who is in the conference and not as participants move in and out of range.
He talked about how to move service programs in the network nearer the user. This gives benefits:
He gave a couple of use cases, including one which augmented video adding tourist information. The processing of the video would be moved to where it can be close to the user.
To implement this he talked about breaking a service into components, determining how to decide where components would be optimally placed, then implementing a dynamic migration algorithm to move service components to the right place. The solution uses 3 pieces:
(The description here seemed to assume that service components were shared, and that if a component was migrated, all the existing users would have to be connected to the new location rather than leaving them running on the existing one).
The placement controller coordinates based on timing, network state, and technology. It can use prediction (i.e. predict that a large load for a particular service will build up in a particular place), etc. (Comment – in a way this sounds a lot like what used to be done in the long distance telephone network in the US.)
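(The following is not the paper's algorithm – just a toy sketch of my own of the kind of placement decision such a controller might make, with invented sites, latencies, and costs:)

```python
# Score candidate hosting sites by total latency to current users plus a
# one-time migration cost; pick the cheapest.
def best_site(sites, users, current):
    def score(site):
        latency = sum(site["rtt_ms"][u] for u in users)
        migration = 0 if site["name"] == current else site["move_cost"]
        return latency + migration
    return min(sites, key=score)["name"]

sites = [
    {"name": "core-dc", "rtt_ms": {"u1": 40, "u2": 45}, "move_cost": 0},
    {"name": "edge-a",  "rtt_ms": {"u1": 5,  "u2": 10}, "move_cost": 30},
]
print(best_site(sites, ["u1", "u2"], current="core-dc"))  # edge-a: 45 beats 85
```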
Redirection is done in a way analogous to DNS resolution or traditional load distribution.
Communication between operator networks is needed when this happens between networks. He showed an architecture of the solution components and made the observation that not all the pieces need to be in every network.
They are setting up a laboratory to test the architecture. This is part of a long-term work plan for NTT DOCOMO on the next generation mobile network, which virtualizes the network (i.e. makes the key components and roles independent of physical location, making it easier to share components between operators).
Question (?) – How can operators differentiate themselves in this model? His answer was that service mobility would be a differentiator (the questioner pointed out that when everyone implements mobility there’s no differentiation). Apparently the view is that being able to do this will be an early advantage. (Comment – this sounds a lot like CAMEL from the IN days, which prompted others to observe the same and ask about whether there would be a motivation for this, given the slow deployment of CAMEL)
Question (?) – 12-15 years ago we had this discussion about IN, the difference here seems to be the use of the cloud, vs operator-specific platforms (SCPs). Answer – yes, plus the fact that we now have services that are much more resource intensive (media). (Comment – yes, but trunking telephone users to a centrally implemented service was very expensive, which was a big part of the motivation for IN.)
Question (Chet McQuaide) – what are the implications for operations support? Answer – he hasn’t addressed that yet, they are aware of the need, but moving the service is hard enough.
This talk was focused on Mobile Virtual Network Operators. One observation is that MVNOs are not well represented in standards. MVNOs are often imposed on the network operators by regulators, which means they are not represented in the standards structure. Nevertheless the MVNO market is growing (ITU says it doubles in 4 years). (Comment – I don't know how fast that really is given the growth of the industry as a whole.) MVNO growth has been particularly high in some markets.
Ten MVNO growing pains:
New technology standards are helping enable MVNOs because they better define the separation of layers. This allows a business to interface only at one layer and operate only at one layer (i.e. it unbundles the need for every operator to provide a complete silo).
She showed an architecture with Access, Transport, Session, and Application layers. Access and Transport are now well separated with clean interfaces, but Session and Application are still more closely coupled than is completely desirable. She showed various entities in the access and transport layers and talked about how it is now possible for a company to specialize in providing WiMAX access without having to provide transport, session control, or services.
Service Providers want to move up the value chain (from access upwards). MVNOs want to move down, starting as application focused they want to extend into the lower layers when they want to.
She talked about both thin MVNOs and full MVNOs. A full MVNO implements all the session control pieces in the IMS architecture (i.e. it has an IMS core). In LTE, more control and policy can be moved into the MVNO this way (i.e. the interfaces allow more of this to be separated from the serving MNO).
For policy to operate properly, the MVNO must be able to transfer the policy to the MNO, which will actually enforce it (i.e. you do not want to transport everything to the MVNO's network to enforce policy there). The key insight is that MVNOs look like roaming – every user of the MVNO is essentially always roaming. The architectural solutions for roaming can serve this need.
It is also possible to have transport route through the MVNO, which allows the MVNO to maintain context and control policy (But may incur additional transport costs).
The process to gain access from trusted and untrusted networks designed for the MNO will work equally well to gain access to the MVNO network. Untrusted networks (e.g. WiFi) interfaces through an enhanced packet data gateway which handles non-IMS protocols and enforces policy.
Her last slide (and the paper) show how there are 10 solutions to the 10 issues she raised at the beginning. Basically the new standards (EPS) provide solutions to many of these problems. Most of the solutions come from things done for MNOs or to support roaming, but the solutions work equally well to support MVNO operations (because MVNOs look like roaming).
Question (Wolfgang) – this related very much to his presentation, he asked about whether she had considered Network Virtualization in looking for solutions. Her answer was not really direct. Instead she talked about how the trend is really for all network operators to move towards an architecture where all their service and session layer is “virtualized”, essentially operating as an MVNO on their own access and transport.
The session addressed the challenges of using networks for machine-to-machine communication instead of communication with humans.
He gave an example which used sensors to implement social energy management. Sensors provide input on energy usage to a cloud application that then provides feedback to users who use it to adjust their own usage (I guess the “social” part is that social pressure encourages users to lower consumption).
The key issues are:
· Scalability (there is a large amount of traffic coming from the sensors)
· The need for low latency (processing has to happen quickly for feedback to be meaningful)
· Avoiding resource waste (e.g. bandwidth must be allocated for peak use, but most of the time it's not needed)
These problems relate to the implementation of distributed applications (e.g. stream data delivery, WAN optimization, etc.) Most of these are specialized solutions not suitable for more general M2M. Work on Service portability via distributed service delivery platform functions is also applicable, but not sufficient (not oriented towards large volumes).
The proposed architecture is AAN (Application Assist Network), which uses 4 layers (application, network processing, virtual network, physical network). The innovation seems to be mainly in the Network Processing layer, which takes input from the others and configures the processing and caching of sensor data for delivery to the application.
As an example, if the application wants caching, the network controller decides where it will be done.
To talk about his example he had to give some background on
the Japanese lifestyle. Electronic money
cards are common – they can be used to buy train tickets, in vending machines,
etc. (Comment – I guess these are like disposable debit cards)
One application was advertising in train stations. What they want to do is basically bring up ads on vending machines relevant to the user when the user approaches the machine. If they wait until the user gets there it takes too long. If they detect the user entering the station at the gate, they then download advertisements to a local cache, which is then used to serve the correct advertising dynamically.
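(A sketch of my own of the pro-active caching idea as I understood it; the cache capacity echoes the 60 MB figure from the talk, everything else – names, sizes, the eviction policy – is invented:)

```python
# Detecting a user at the station gate triggers prefetching that user's ads
# into the cache nearest the vending machines, before the user arrives.
from collections import OrderedDict

class EdgeCache:
    def __init__(self, capacity_mb):
        self.capacity, self.store = capacity_mb, OrderedDict()
    def put(self, key, size_mb):
        self.store[key] = size_mb
        while sum(self.store.values()) > self.capacity:
            self.store.popitem(last=False)      # evict oldest entry
    def hit(self, key):
        return key in self.store

cache = EdgeCache(capacity_mb=60)               # the 60 MB figure from the talk

def on_gate_entry(user_id, ads_for_user):
    for ad, size in ads_for_user(user_id):      # prefetch while the user walks
        cache.put(ad, size)

on_gate_entry("user42", lambda u: [("ad-coffee", 12), ("ad-travel", 20)])
print(cache.hit("ad-coffee"))  # True: served locally when the user arrives
```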
He went through an example of how the network is virtualized to support this (unfortunately there was some interruption of the slides that made this hard to follow).
They compared a commonly used cache management algorithm (Squid?) with their structure of pro-active caching. Their pro-active caching resulted in higher cache hit rates (much higher for rarely accessed data) provided the cache is large enough (60 MB – I don't know what's magic about that size). (Comment – he didn't really talk much about this, but one interesting thing was that the proposed solution performed worse at smaller cache sizes, though not much worse in most cases.)
The future work is in improving the cache controller and in looking at other network functions to study. (Comment – this work was much more narrowly focused on the problem of cache management than the title would suggest, unfortunately)
Question – what’s the network (not really answered other than it’s a high speed network.)
Question – does this actually exist? No, personalized vending machines aren’t out there yet. He did though say they have vending machines with intelligence and cameras that can figure out who is in front of a machine and pick an ad to display. (Comment – I’ve read about this – they actually try to figure out the demographics of the person standing there. This sounds “creepy” to me, but I think this is largely cultural – some cultures are more comfortable with automated personalization)
Question – why not just put enough memory in the vending machine to hold all the video? (He didn’t answer this but talked a bit about the machines. It may be a capacity issue or just the problem of keeping them up to date.)
M2M includes not just the machines as endpoints, but machines inside the network which must understand the communications need. We already have architectures for NGN (3GPP, OMA, IMS, etc.). His observation is that IMS isn't suitable for M2M.
He showed a picture that Fokus has been showing for some time, showing a decline of the Telco world which includes IMS and even the Extended Packet Core (EPC), with a growth of internet/cloud based solutions. Networks are carrying a greatly increasing breadth of traffic.
The EPC is an all-packet solution that incorporates multiple forms of access and easily integrates with the internet. (Comment – yes, but it's much more complex than typical internet architectures.) He showed an architecture for it and all the problems it solves, but made the comment that it is complicated, synchronous, and session oriented. The trouble is that M2M isn't session oriented. The real issue is that all the work on QoS that has been done in the context of IMS focuses on sessions; without sessions, just peer-to-peer messaging, it doesn't apply easily.
Their proposal is in fact to keep the existing session based structure but add an application based QoS management architecture (As I read the charts, it maintains some kind of context that will decide what the message flows of an application are and interfaces that to the IMS QoS management structure.)
He went through how this could be done and how it might map into existing API structures for QoS control. He talked about a key problem being the number of messages, as the number of Machine endpoints gets large.
Fokus builds prototypes and testbeds for many of these concepts, and he talked about how to incorporate M2M QoS into them.
Question (Kris Kimbler) – “You said OneAPI exposed everything in the network, is that right?” Answer – admits it’s based on just reading standards without hands on knowledge. Followup – somehow you have to expose more, specifically policies, are you doing this in the Fokus “playground” – Answer – yes, but it’s limited.
Question (Bernard Vilian) – you have two mechanisms for QoS, Session and Application oriented, where is the boundary and is it clear? Answer – right now the interfaces are horizontal – everything must be mapped into the session based interfaces implemented by the core network. As EPC rolls out he expects new forms of control to be available and perhaps a core that supports two parallel mechanisms.
The Internet of Things means that everything has the potential to communicate, and “things” have their own personalities, access services, etc. Current Problems:
Their work is under the banner ISIS (Infrastructure Service for Integrated Solutions.) Telenor is the main coordinator, but they have partners in universities and small companies.
The basic unit of composition is an application element ARCTIS calls an "Edge" (more on this below).
There is a whole toolbox of things for doing this: ARCTIS – a tool for analyzing and developing applications. COOS – an operating system which handles message routing and interconnection. ISIS Store – an app store for the resulting application elements ("Edges"). They have looked at all this using smartphones as a model. These pieces are built by different players.
She showed ARCTIS, which looks like a basic building block composition based tool for composing applications. There is an analyzer that will look for design problems, then a compiler which compiles for a given platform and packages it up for delivery.
She also talked about a composition engine that is end-user oriented which allows a user to connect building blocks (i.e. dynamically create something like “when the laundry is done, notify me and turn the TV off”). The user tool is called “Puzzle”. (Comment – I’m not really sure how this differs from ARCTIS).
The lifecycle here is that an application builder uses ARCTIS to produce some kind of application element and then makes it available in the store. What ARCTIS produces is “Edges”. These can be obtained by end-users, who download them and install them in their residential gateway. They then use Puzzle to put those edges together to do what they want. (Comment – this is interesting, it seems to presume a very standardized world for connecting things centered on a home controller. I’ve seen many systems that posit a standard home controller, but haven’t seen this really deployed)
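(A minimal trigger-action sketch of my own of the kind of composition Puzzle was described as enabling; the event and action names are invented:)

```python
# End-user composition as trigger -> actions: "when the laundry is done,
# notify me and turn the TV off".
rules = []

def when(event, *actions):
    rules.append((event, actions))   # wire blocks together, IFTTT-style

def fire(event):
    for ev, actions in rules:
        if ev == event:
            for act in actions:
                act()

when("laundry_done",
     lambda: print("notify: laundry is done"),
     lambda: print("tv: off"))

fire("laundry_done")
```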
What they have is basically a concept demonstration (they actually have a laboratory/testbed where you can see it), that doesn’t yet address things like billing. The belief is that this may not really be a direct revenue source, but instead an enabler for interesting applications which drive network usage.
Question (Claudio Venezia): In your demo room can the users access “things” outside of the room through the internet? Answer – yes, we think so, but the device to be accessed has to be registered through COOS. Claudio said what he was really interested in was whether they had a way around the limitations of UPNP. (Comment – the basic message is that you have to “import” the device into the environment but it can be anywhere. The real key is how hard this is I guess.)
Question (?) – does this target skilled users or unskilled (they want to target unskilled).
Question (Igor Faynberg) – explain more about COOS – she can’t answer that, it’s an independent project. There are sources of information on it. Igor’s interest is whether it’s standards based.
Question (?) – have you done usability tests on Puzzle? Can unskilled users really do it? Yes, they have. She actually brought up the site IFTTT.com, which has a similar interface for end-user programmability. She said they tested several interfaces.
Question (?) -- have you looked at business models for this. They have but are struggling with this. Telenor’s concept was to make the platform available for free but actually use it to build vertical applications which they would sell at a profit (e.g. healthcare).
Question (Kris Kimbler) – it seems that composition is very simplistic, doesn’t it need to evolve to full orchestration? Yes, but they haven’t done it yet.
(General comment – this is a very interesting paper/project, but it's more about service creation than about any of the underlying problems: billing, security, transport efficiency, performance, etc.)
General Question to all: If you look at the volume of traffic in our networks, what’s the estimate for volume that M2M will create? Are there issues here? (Comment – to me the interesting thing is that M2M is likely to exert stress in a completely different dimension from the growth of video. Video is session oriented, but M2M is not and will stress routing and control much harder than video.) The answer from one of the speakers was 96% growth per year, but it’s nothing compared to the overall mobile traffic.
She introduced the session by talking about the era we live in – attention is a key resource. She had a big listing of popular web sites (Google and Facebook were two). Social networking and social media attract people, but now they are seen as an opportunity for advertisers to get attention. This raises the level of privacy concerns inherent in these kinds of services.
The concept here is linking digital signs with mobile phones, using IVR to interact with a user and coupling that to interactively updated signs. He played a video illustrating the concept. Two women are looking for a place to have lunch and see a restaurant displaying a digital sign. It has a place to touch your mobile phone, which allows the system to recognize the phone; then they can interact through an IVR (or presumably a browser) and get displays back on the sign. They make a reservation, then talk to an operator to get a window seat, where the operator gets the information from the system to make it work.
Another showed a person casting a vote for a Japanese “idol”
show using a phone number on a big street display, and getting back interactive
displays on the sign showing the votes and a short video clip from their
selection. (Comment – interesting, not at all what I thought this was about from
the abstract. This isn’t really very
radical as I understand it. It’s not a
whole lot different from “click to dial” kinds of internet services.) Believe
it or not, I worked with someone on a service like this in 1978, demonstrated
over a prototype ISDN line, with the display being just a home video terminal.
The architecture uses a device linker that connects the
phones to the signs. The signs are basic
content delivery systems capable of interactively updating what is
displayed. The linker supports 4 kinds
of linkages (1-1, many to many, 1-many, many-1) (I think this just controlled whose input went to the sign application
and where the results come from).
There is a content synchronizer, which associates device IDs with the signs and users and works through lower-level managers which drive the sign and interact with the user(s).
The service is scenario driven, with scenarios describing lists of presentations and responses. (Comment – yup, primitive hypertext; that's how we did it in 1978 too.)
The prototype uses phone numbers (i.e. the user calls a phone number on the screen, which associates the phone to the sign.) In fact the “touch phone here” panel showed in the video simply is a way of automatically transferring the phone number (Bluetooth?) and placing the call automatically. The IVR dialogs are driven from VoiceXML.
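(A rough sketch of my own of what a scenario might look like as data – the dialog content is invented; the real system drives the IVR side from VoiceXML:)

```python
# Each scenario step pairs what the sign shows with the responses it accepts.
scenario = {
    "start":   {"show": "Welcome! 1=Reserve table, 2=See menu",
                "next": {"1": "reserve", "2": "menu"}},
    "reserve": {"show": "Party size? (1-9)",
                "next": {str(n): "confirm" for n in range(1, 10)}},
    "menu":    {"show": "Today's specials ...", "next": {}},
    "confirm": {"show": "Reserved. Press 0 for an operator.", "next": {}},
}

state = "start"
for key in ("1", "4"):                 # simulated keypad/IVR input
    print(scenario[state]["show"])
    state = scenario[state]["next"].get(key, state)
print(scenario[state]["show"])         # "Reserved. Press 0 for an operator."
```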
He talked about different usage cases, one being advertising, another being information gathering (both 1-1).
For many-1 he cited on-street televoting or gaming. Also instant and real-time polling. (Comment – I don’t know how they coordinate multiple users. This also seems to be another service that depends on local culture. He talked about this for use in schools, to check understanding. Having your school work put up and corrected on the board is painful in some cultures, but that seems to be what he was talking about.)
This is a lab prototype, they will be doing field tests.
Question (Igor Faynberg) – have you looked at engineering of this (server capacity?) Answer – their system has only been done for exhibitions. They have mainly been looking at user reactions. They don’t have a current plan to evaluate performance for the prototype.
Question (?) – Have you done usability studies for this? What is the end-user reaction?
Answer – there are similar use cases already using other technology. (His response on usability wasn’t clear).
Question (?) -- what’s the connectivity between the mobile phone and the screen? Are there any specific device requirements for this to be used? The prototype uses a contactless ID card system. Japanese mobile phones have a near field communication capability that is used to pass the information (What I called Bluetooth above, otherwise it does work as I speculated)
(Comment – the speaker has published over 200 articles, and several books in telecom and done training for a lot of telecom areas, but I haven’t encountered him probably because of where he comes from)
What’s a check in? It’s a presence status associated with a location. Check-ins award the user with points and badges. (The model described was basically what I have heard for “foursquare” and some other social sites.) (Comment – I’ve read a lot about these systems and the technology that supports them, but frankly don’t understand the appeal)
The traditional approach uses one check-in per location or object (event). What they want to do is allow businesses to define their own rules for check-in, and for what the user gets as a result. They want to avoid using an intermediate service to handle this (presumably to avoid paying them either with money or by exposing the data of the application to them). They want to base the application on smart phones.
The application described was “QRpon” – use your mobile phone to check in and get a confirmation of what your benefit is in return. They use a set of rules to determine what the user gets back based on their social networking connectivity and check in events.
Another was “Pizza buzz” – the user gets an offer to get a second pizza at half price. If they accept, they are given a code to get the pizza.
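(A sketch of my own of business-defined check-in rules of the sort described; the conditions and rewards are invented:)

```python
# Businesses define their own rules for what a check-in earns, based on the
# user's social connectivity and check-in history.
def reward_for(checkin):
    if checkin["friends_checked_in"] >= 3:
        return "free dessert"            # social-graph based rule
    if checkin["visits"] >= 5:
        return "10% off"                 # loyalty rule
    return "welcome coupon"              # default for a first check-in

print(reward_for({"friends_checked_in": 4, "visits": 1}))  # free dessert
print(reward_for({"friends_checked_in": 0, "visits": 6}))  # 10% off
```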
He gave a summary on the implementation. They are based on Facebook, and that’s the social connectivity they get. It uses only “facebook connect”, and doesn’t use anything password based.
In comparison to Foursquare they are not tied to locations – you can check in to a product, an event, or something else.
(Comment – the bottom line seems to be a competitor to Foursquare with some differences in implementation and the only real difference for the user is the ability to check in to things that aren’t location based. They are apparently trying to sell Facebook on adopting this solution as part of their core.)
Question (Claudio Venezia, TI): He was asking about the details of implementation in HTML 5, about how they get access to peripherals of the smart phone. The speaker didn’t know the prototype in enough detail. Currently they don’t do much with the smartphone.
Question(?) – you avoid Foursquare, but force the business to deal with Facebook, why do that? Answer – it was easier to work with Facebook. They can work with any social network, Facebook is just an example.
She showed a chart of search volume related to augmented reality, showing that it became widely pursued beginning in 2009. Applications will be 2.2 billion for 2015 by some analyses. Augmented reality appeared in the 1960s (Sutherland) as a concept, then in 1992 from two Boeing engineers using a heads-up display. In 2011, with 40% penetration of smartphones in the US and elsewhere, the technology is within reach of the mass market.
She showed a whole bunch of smart phone apps, from games to city guides to navigation aids to historical views, to translation, to compensating for visual impairments (Comment – the spectrum of things people do with smart phones continues to amaze me)
They studied the Android market, getting data during June/July 2011 from 442 applications. They picked Android because it’s growing rapidly and the Android store provides the information they need to assess the application market.
Mobile Augmented reality is a very small percentage (0.13%) of all applications. 38% of those are travel related. (Lifestyle is the next category – too small to read the rest here.) During 2 months the number of applications was up 24% (Comment – this may match the overall growth of Android Applications)
There was some information on which version of Android they built for and which versions were in the market showing most were built for older environments, while most phones actually run newer versions of Android. (Comment – this is the Achilles heel of any programmable environment – people have to build for the lowest common denominator, the environment that is deployed everywhere, which creates a drag on using new capabilities)
72% of apps handle localization information, 81% use a camera, 94% of travel and local apps use GPS. The only context information most use is location.
74% of apps are free, 12% are more than $2. Only 3 apps have more than 500K downloads (all free I think if I read the chart correctly). A little less than half have been downloaded more than 1,000 times, which is considered a threshold for success.
Looking at paid vs free makes the preference for free even clearer – free apps are much more likely to have >100 or >1000 downloads.
Question: How did you get data about the downloads? (He asked another question about business models and she never got back to answering).
Question: What hinders greater use of these apps? She isn't sure, but speculated that they are a bit awkward to use because of the limitations of the phone as a display – it's a bit cumbersome to use the travel-oriented app. She doesn't believe it will be really heavily used. (Comment – about 15 years ago I remember seeing a presentation from a team at MIT that used a headset display for this kind of thing, claiming that you needed to make the technology as easy as wearing glasses or it wouldn't go anywhere.) The users are satisfied, but it's not going to be a huge seller.
Question (Igor Faynberg) – he wondered how they gathered such large numbers. They basically just search the Android Market for new apps and data on existing apps.
Question (?) How would you get data on usage of these applications. (They use the “*” rating from the Android Market). (Comment – you really want to get information on the usage of the application. I wonder whether Android or iPhone have any “hooks” to collect data on that and if so whether there’s any visibility of it. If so, I think this would be very valuable)
General Comments:
Claudio Venezia made some general comments about the development of applications and the consequences of the lack of standards. He gave a plug for a W3C study group on augmented reality and the need for standards.
I asked the speakers to comment on a social issue – all these services depend on user access to particular technologies or devices, and while 40% penetration of smart phones is nice it’s not going to be 100%. Manfred addressed it by saying maybe we need something like the old France Telecom Minitel, which gave data access to everyone. Hui Lan said it’s basic capitalism – new services focus on where the revenues are.
He began with an example of a developer who moved from Windows Mobile or Symbian development to a mobile app company. Basically this person was successful, had a successful company, and earned about 100 times the personal income building apps that he did developing for hire in the earlier environments.
He started with two key questions: How important are developers to the strategies of Apple and Google, and what’s their role? (Comment – I’d start by looking at what the strategies of Apple and Google actually are; they are clearly quite different. Apple can give away the environment to sell the hardware; Google doesn’t make money on the hardware and has a different motivation to be in the mobile infrastructure business.)
He showed the charts of the popularity of Android and Apple’s iOS vs the other mobile environments (nothing new there, Apple and Android are clearly winning this round).
Apple’s strategy was given as revolving around “Delivering a Compelling User Experience” through:
These four elements, plus all the pieces that support them, are really about a compelling user experience. Apple succeeds because its users are loyal, and they are loyal because they believe their experience is superior to that of their competitors’ users. (Comment – Yes, I’d agree with all of this concerning Apple. What’s interesting, though, is that the news just this morning was full of reports of Apple failing to deliver a compelling experience on their latest phone, and how that was impacting their suppliers and potentially opening doors for competitors.)
The developers basically fit this strategy by building apps to support the compelling user experience. The iPhone is the largest “digital backpack” – it has to have everything you need, and the developers are what makes that happen.
Google’s strategy isn’t as clearly focused:
(Comment – very interesting – this doesn’t have the same feel as Apple at all! This could be the strategy of almost anyone in the internet. Is this really all there is?)
What’s the origin of Android? There is lots of debate over it, and no clear recognition of why it exists. Where it fits, though, is clearer – it fits “extending the reach”, since it enables Google to go with you everywhere. They learned from Apple to value developers and created a very similar ecosystem to support that goal.
Google’s advertising model benefits from the long tail. By being able to target, they can offer advertising models that support small companies (vs mass media, which only the big players can afford). Targeting and reach are needed to support this. (Comment – It’s not entirely true that Google has uniquely figured this out. My father was in the advertising business and for years in the 1970s ran a business that offered small businesses targeted advertising, delivering it through local TV, newspapers, and special interest radio and publications. Advertisers knew how to reach the long tail before Google, but Google has no doubt made it more efficient to do so.)
Conclusions – Apple and Google have very different mobile strategies. Both use developers as a key part of their strategy. Developers aren’t simple suppliers any more, they really supply the innovation. Many have become “service providers”, because they depend on being able to serve their customers to get revenues.
Question (?) – Can you take the best of these two models to create something even better for Telco operators? Answer – this doesn’t make sense. Telcos aren’t in either advertising or hardware, so it’s not obvious how they benefit. (Comment – the question really suggests we are struggling with whether there’s any way for the telecom industry as a whole to benefit from
Question (?) – Qualcomm has been pursuing an ecosystem strategy for years. Do you see operators or vendors adopting this. (The Speaker wasn’t familiar with this). The general answer was that Telecom hasn’t shown the same level of commitment to this before (Comment – many operators and hardware companies tried to build ecosystems in the days of IN and other programmable network elements. The real thing that changed in my view is that the download model provides a much easier way for developers to reach customers. The ecosystems of the past were mainly either software development companies lining up to do what amounted to custom development for operators, or solution providers who sold solutions to niche markets based on their applications on custom hardware. Smartphones created a mass market opportunity for applications.)
Question (Roberto Minerva): Apple and Google are closed ecosystems – apps work only on one or the other. Would a typical user be willing to pay more for a more open approach? Answer – probably no. Roberto reiterated his example from my lunch with him of why there’s a risk in closed environments – imagine a Google postal service – it’s free, no stamps, but they open your mail, read it, insert advertising based on what it says about you, and sell information on your content to others to pay for the service. Nobody would accept this, yet it’s exactly what Google did with email, and it’s a clear risk in dealing with them. The answer was that it’s very much about trust – users trust the brand. (Comment – this is interesting, because violating an expectation of privacy is a great way to destroy trust. I think the real question about privacy is what user expectations really are. I am constantly surprised by learning how low they apparently are.)
He started with the observation that ICIN is a unique conference in having both business and technical people in attendance.
Operators are at a crossroads. They have to make significant infrastructure investments. Network infrastructure is absorbing a lot of the profits being made (50 Billion out of 300 Billion of revenue). This reflects the fact that the “access based model” has reached maturity.
The world is becoming more urban – number of people in cities passed rural population recently, and by 2050 more than twice as many people will be in cities. He noted that this is a rosy scenario – people move to cities during good times and away during wars and other stresses.
Managing city services is essential. Getting access isn’t the big problem, but managing it is. (Comment – this is an interesting point, there’s a lot of interest in extending the reach of infrastructure to the mostly rural undeveloped world, yet the real answer may be fewer and fewer people live rurally allowing coverage with approaches that we wouldn’t use for a big population)
Globalization means more and more things are happening away from the local customer. Cities have to get smarter: existing cities have to be refurbished, but people will also in fact build whole new cities based on modern concepts for infrastructure (
What does this mean for telecom network infrastructure? Two alternatives:
What are Telco’s assets for smart cities? Telco was once a startup kind of industry – early Telcos understood the network effect (value of network increases non-linearly with the number of people connected). They deployed pro-actively and sold aggressively to grow the network. Telcos have enablers not everyone else has:
Question (Dave Ludlam) – our world (Telecom) has been largely shaped by government and regulators; what role will they play in the future? Answer – The future will be a “Coalition of the willing” (i.e. regulators have to enable the right thing to happen). He talked about the
Question (ALU Person) – do emerging markets have an advantage in adopting this over mature markets? Answer – yes, but it depends on money. A new city in a place with money has an advantage, but not if there’s no money. A mature city with money and citizen participation will beat a city that lacks either the willingness or the funds to do this.
There are 3 types of Telecom Cooperation:
New media has no cooperation today, how did it come about for telco? -- It happened mainly because regulators asked for it.
New media services tend to lock users in:
He went through how social networks would have to interwork in order to allow cooperation to avoid lock in. (Comment – a lot of this is exactly what was being done a decade ago with Instant Messaging, which had many of the same aspects of Social Networking. Standards were developed for interworking, but the bottom line was in fact quite simple. The major suppliers of the technology (AOL, MSN, and Yahoo) all got revenue from ads on their clients and didn’t want to interwork, because allowing a user to export their contacts or IM to another network without using the provider’s client meant they got no revenue from that user.)
How could this be done?
It might just happen through providers opening APIs. The trouble is that some services discourage portability (Comment – yes, see above). Facebook discourages all of this.
Government may need to intervene. Maybe through funding they can encourage new players to open their services, but the Internet doesn’t have a tradition of this.
He closed with some what if questions:
Question (me) – IM went through this 10 years ago and never got to real interworking. Can we learn anything? Hans – this time it’s different, because IM was always just an add-on to other means of communication, while now there are people who are “Facebook only”. If this continues, something will have to force openness.
Question (Chet McQuaide) – he observed that some folks use multiple social networks because they are specialized (LinkedIn and Facebook). He also asked about the development of applications on Twitter, and the fact that sometimes people will use Twitter to communicate in an emergency rather than dial an emergency number (911). Hans said that while he’s not an expert, he thinks people would want to use a single interface. (Comment – one of the aspects of IM was multiple personalities. Users had a work IM personality and a personal personality. I think of LinkedIn vs Facebook that way.) Hans pointed out that the emergency communication situation carries a risk – emergency telecom has been engineered to get priority and work in emergencies. Someone tweeting to find the location of emergency equipment or summon an EMT may face delays related to SMS or other technologies which have no awareness that the communication is urgent in nature.
Question – It has been suggested that openness has to be forced from above. Can’t it be done by making these companies see the value of openness as being key to grow the market? (Hans – maybe, but he made an analogy to the trouble of trying to get their children to share and be open.) (Comment – I tend to agree, I think it is very hard to motivate people with a large and loyal user base to open up. Consider Apple. Consider the fact that Microsoft still has a virtual monopoly on the office suite because of the proprietary file formats of Word, Powerpoint, Excel, etc. )
He started with a poll of how many people work for operators (maybe 1/3 of the room).
Operators are not known for innovation – they are way too slow introducing new things. This is because operators do interoperability and rock solid operation up front, and that’s hard. (Comment – maybe, but I think it’s mainly just that operators are big mature companies that have a lot of infrastructure that has to move.)
Operators know this. Smartphones have changed the game. Users expect innovation from apps. But there are lots of problems:
He notes there is a shift in the industry towards operators working together to try to solve this (OneAPI, OMA APIs, etc.) (Comment – yes, but unless the result is a lot different from now I would expect the developers to use this only if they have no alternative.)
Both Web developers and mobile App developers benefit from APIs to mobile networks. Web developers are potentially a bigger market than device apps. Today that’s not the case – the web community ignores any capability of the mobile operator except transport. The potential is that the mobile network has assets to offer.
Will this produce revenue? The answer seems to be maybe. The problem is that companies can’t charge for many capabilities (because of “free” equivalents), but he felt the higher level APIs might be something they could charge for. The value proposition is the ability to have things work consistently and simply, reducing the effort the developer has to put in.
Standardized APIs are important – the first focus is mobile networks, but they expect what they do will also apply to fixed networks, DSL and Cable.
OneAPI has grown from a small effort to an industry standard. GSMA and OMA have agreed to cooperate. They started using ParlayX, but moved quickly to RESTful APIs as the answer – they have to look just like web APIs.
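(Comment – to make the “look just like web APIs” point concrete, here is a minimal sketch of what a RESTful OneAPI-style SMS request could look like in Python. The gateway host, credentials, and app identifiers are my own illustrative assumptions, not a real deployment.)

```python
# Illustrative only: the resource path is modeled on the published OneAPI
# SMS REST interface, but the gateway host, credentials, and app identifiers
# here are placeholders, not a real deployment.
import requests  # third-party HTTP client (pip install requests)

GATEWAY = "https://oneapi-gw.example.com"  # hypothetical operator gateway

def send_sms(sender, recipient, text):
    """POST an outbound SMS request; return the URL of the created request
    resource, which can be polled later for delivery status."""
    url = f"{GATEWAY}/1/smsmessaging/outbound/{sender}/requests"
    body = {
        "outboundSMSMessageRequest": {
            "address": [f"tel:{recipient}"],
            "senderAddress": f"tel:{sender}",
            "outboundSMSTextMessage": {"message": text},
        }
    }
    resp = requests.post(url, json=body, auth=("appId", "appSecret"))
    resp.raise_for_status()
    return resp.headers["Location"]
```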
GSMA actually built their own gateway. They have deployed a gateway in
An interesting comment – It’s much better for the developers to get connected to everyone through one gateway. (Comment – yes! The biggest problem with Parlay, et al is that they required a business relationship between the application provider and the operator. He said this can take 3 months, I think he’s being optimistic).
Location – you have easy access to device based GPS, but the API will let you do location on friends (a “find me a pub” in the
Payment is important because it gets access to payment from any mobile customer without having to have the user put a credit card in. It’s also critical for young customers who may not have credit cards.
Version 2.0 adds Call control, Device capability, and Data Connection Profile (what network am I on and what will it support.)
V3.0 adds Femtocell support, user context, and anonymous customer references (i.e. you can create an anonymous reference that is unique to a particular service that will allow the service to identify you without giving away your mobile number.)
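(Comment – the anonymous customer reference idea is worth making concrete. A minimal sketch of one plausible way such a token could be derived; this is my own illustration, not the standardized mechanism.)

```python
# One plausible construction (my assumption, not the GSMA design): an HMAC
# over the subscriber number, keyed by an operator secret and bound to a
# service id, yields a stable token per service that exposes no MSISDN and
# cannot be correlated across services.
import hashlib
import hmac

def anonymous_ref(msisdn, service_id, operator_secret):
    mac = hmac.new(operator_secret, digestmod=hashlib.sha256)
    mac.update(service_id.encode())  # bind the token to one service...
    mac.update(msisdn.encode())      # ...and to one subscriber
    return "acr:" + mac.hexdigest()[:32]

secret = b"operator-private-key"
# Same subscriber, different services -> different, unlinkable tokens:
print(anonymous_ref("+447700900123", "pub-finder", secret))
print(anonymous_ref("+447700900123", "parking-app", secret))
```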
They have about 300 developers building applications for a showcase. Some are mobile, most are extensions to web based services.
They view this being done both by operators (and vendors) exposing OneAPI APIs directly from their own service development platforms and networks, and via gateways. He talked about a desire to have more regional gateways like he described in
How can operators help? Adopt the APIs, and expose them, preferably through gateways again.
Question (Chet McQuaide) – is the location API robust enough to use for emergency services? (Yes)
Question (?) (From someone who has been building apps for a long time and has been burned by promises from APIs) He asked how you avoid becoming a commodity if you work through only standard APIs. His answer was that proprietary APIs are ultimately limiting – there isn’t a critical mass.
Question (?) – Are operators really not innovators? He gave examples of why they would always lag.
Networking changes society (empowered people, changing businesses).
What are the enabling technologies?
(Comment – I would add smart programmable endpoints to this one. I think those four together are a disruptive change.)
How has cloud computing evolved? The first phase was all about consolidation of computing in large data centers – how could companies manage data and computing more efficiently. It’s a utility – remote delivery and self service. It does mean a new business model – you don’t buy a box, you pay for a service as you use it. (Comment – interesting. He’s right, and I suspect there are psychological and operational factors that either hinder or help it.)
The second phase was about rationalization – common architectures for networking and storage, also expansion from enterprise to consumers. Many consumers use the cloud without knowing it.
The third phase is a migration up the value chain. Clouds no longer just provide bits and cycles but provide platforms and services.
What’s next: Personalization (he gave real-gaming, 3D and HD video, and virtual reality, each needing specialized resources). (G)Local services are another trend: interactions with people and things nearby, but working everywhere. Any time, Anything, Anywhere (high availability, quality, and security).
This is where the telephony industry can really play.
He cited the CONST principle – you can’t optimize computing, storage, and networking at the same time; at least one will be sub-optimal.
Concentration is unrealistic, distribution must be part of Cloud 3.0. Quality of experience is critical and needs to be measured and managed. Network performance is essential to achieving this. The network has to adapt to the communication needs of the application.
What are the bottlenecks?
Middle mile bottlenecks have been addressed with things like Content Delivery nets, Caching, etc. The last mile bottleneck needs to be addressed with mobile prioritization. (Comment – yes, provided there’s enough bandwidth. Sometimes it doesn’t matter whether you have priority or not because the network just doesn’t have the capacity.)
All this was really in support of accelerating mobile content delivery by bringing the content delivery application into the network rather than having it operate over the top. There are advantages in this, but I’m sure there are also costs as perceived by the end content providers.
Question (Dave Ludlam) – Have you addressed disaster planning? His answer was that distribution addresses that – multiple sites give you robustness. (Comment – yes, provided the pieces can work independently.) He said that Google credited their distribution for how well their services survived the earthquake in
Question (Chet McQuaide) – what is the nature of the distribution – hierarchy or mesh? It may be hierarchical initially but will need something more robust ultimately.
He talked about all the change that has come with 3G and 4G networks. His message here was networks have to expect the unexpected – OTT applications, video, apps “in the cloud”, etc. There are many diverse demands for the network.
The next generation network of radio and fiber introduces a need for capacity and cost efficiency. Self serve networks enable trading and brokering to meet the diversity of demand. Networks have to be engineered for worst case, but most of the time this is unused. Virtualization and trading is a way of avoiding resource waste. Operators have traditionally shared physical facilities (e.g. towers), but the business is moving towards deeper sharing (multiple network operators on top of the same physical networks, 3rd party services, etc.)
How can a slow moving network design support flexibility? The answer for them is virtualizing the pieces, much as IN virtualized service logic in switches. (Comment – I think the analogy is fine for control pieces, but I’m not so sure about the actual connectivity. Does it really do anyone else any good if I have unused capacity inside my radio access network? I don’t know. Some of what I’ve heard about serving high bandwidth streaming even challenges this – it does matter where the streaming server is. The fact that someone has idle capacity on a streaming server in
He proposed a complete architecture to basically broker networking components between operators in a wholesale market. They have looked at how to handle things like payments and QoS.
“Liquid Net” – they have introduced this concept as a way of supporting video on demand. Many such functions exist today. (He talked about adaptive antennas as an example of a way to refocus capacity.)
He talked about a lot of partners in EU projects on this work.
Question (?) -- The previous talk introduced the notion of network awareness; doesn’t this contradict virtualization? Answer – network awareness isn’t intended to make applications know about the network, it’s intended to make sure the network can support their needs up front.
Question (Roberto Minerva) – Network Virtualization has a lot of interfaces, and that means APIs. Have you worked out those interfaces? Second point – the talk was about connectivity, but doesn’t the problem include storage and processing? Answer – yes, the network will integrate all 3, and they have to be orchestrated to deploy cloud services. They are working on APIs (he cited OneAPI). Interfaces need compatibility with existing standards where they apply (e.g. MPLS).
Question (Igor Faynberg) – What prevents mobile devices from being single points of authentication (i.e. your mobile device is your passport), and what prevents operators from offering that capability? (Comment – Igor has a special interest in identity management) Answer – we have been talking about SIM card based authentication for some time and federating that identity across operators, but it hasn’t taken off. Maybe this will be much more successful in developing countries where people don’t have credit cards or passports. (i.e. the problem is that people in the developed world have too many conflicting structures to do identity management already.)
Answer – it is coming,
Igor – he tried to push this concept in IETF. Every time he did it he got push back that there weren’t standard APIs to access SIM based identity that would support this notion. Can we fix this? Answer – WAC is trying to develop an identity API.
Question (Chet McQuaide) – Sessions have addressed network evolution, cloud evolution, and service migration, but the past suggests that the key to success is in service management. Do you have any insights on how to do this in the new environment? (Big pause). One response was that service management becomes data management in a way because that’s the real problem with the amount of data service providers wind up with on users. Another response – we need stricter and better defined lifecycle management for services, especially those that store data – track where it’s stored, make sure it’s deleted when it’s not used.
The poster session had 7 presenters accepted from the conference submissions, most of which were submitted specifically to that session. We also had two presentations from sponsors (OMA and Nokia-Siemens). The following summarizes what I learned from visiting some of the presenters.
This presentation showed how to save energy in the radio access piece of carrier networks. Carriers deploy multiple technologies and a dense mesh of base stations to meet the peak load, but most of the capacity is idle most of the time. This poster showed how a network can become aware of the actual load and determine what portions of the access network can be shut down. They demonstrated it with WiFi, showing a network shutting off one base station and associated routing when demand was lowered by stopping a video stream to a laptop. The remaining applications on the endpoints connected to the base stations shut down are seamlessly migrated to other access points.
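(Comment – as a sketch of the kind of decision logic such a load-aware network needs, here is a simple greedy plan in Python for choosing which access points can be powered down. The greedy rule and all names are my own assumptions, not the poster’s algorithm.)

```python
# Given measured load and capacity per access point, greedily power down the
# least-loaded APs as long as the remaining ones can absorb the displaced
# traffic. The greedy rule and all names are illustrative.
def plan_shutdown(load, capacity):
    active = set(load)
    off = set()
    migrated = 0.0                                 # traffic displaced so far
    for ap in sorted(load, key=load.get):          # least-loaded first
        rest = active - {ap}
        spare = sum(capacity[a] - load[a] for a in rest) - migrated
        if rest and spare >= load[ap]:             # others can absorb its clients
            off.add(ap)
            active = rest
            migrated += load[ap]
    return off

demand = {"ap1": 2.0, "ap2": 0.5, "ap3": 8.0}      # e.g. Mbit/s per AP
caps = {"ap1": 20.0, "ap2": 20.0, "ap3": 20.0}
print(plan_shutdown(demand, caps))                 # ap1 and ap2 can sleep
```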
This presentation concerned the new “.post” top level domain, which was created as a space for secure mail and messaging services (free from spam, viruses, etc.). A major need for this is secure identification for individuals. This poster proposes to do this using a USB “Dongle” as a security tag, combined with individual login to allow a user to connect via a secure IMS based network (not public internet) which can then allow the user to compose or access messages with the assurance that the identity of the subscriber is known and there is no opportunity for introducing malware or spam without the subscriber’s cooperation. (Comment – this isn’t quite equivalent to the complete absence of malware since the subscriber’s computing platform can be compromised) They demonstrated this using a recorded session, and showed examples of the dongle.
This poster addressed the issue of trying to deliver a very high resolution media stream to a limited resolution device efficiently. Instead of trying to deliver the full information available to the device, the video source is spatially segmented, and the user is given an interface where they can select what portion of the picture and what magnification they wish to view; only those segments of the original picture needed to construct the desired view are sent. The desired view may be assembled in the end-user device from multiple video segments. The scheme also allows combined views to be available (e.g. a low resolution version of the full video that is delivered when the user wants to see the whole picture on a low resolution device, rather than delivering the full stream and requiring the device to stitch it together and reduce the resolution).
The scheme was demonstrated using a tablet to control scaling and a laptop to display (mainly because they have not implemented the video stitching on the tablet yet). It is simple and intuitive and meets the goal. One thing that isn’t clear is whether this can be done for real-time streams (the demonstration and discussion all dealt with recorded media which had been pre-processed to produce the segmentation.)
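(Comment – the tile-selection step is easy to make concrete. A minimal sketch, assuming the source is pre-segmented into a uniform grid, of computing which tiles overlap the user’s chosen viewport; the function and parameters are my own illustration.)

```python
# view = (x, y, w, h) in source-picture pixels; the source is assumed to be
# pre-segmented into a uniform grid of tile_w x tile_h tiles.
def tiles_for_view(view, tile_w, tile_h, grid_cols, grid_rows):
    """Return the (col, row) ids of the tiles overlapping the viewport."""
    x, y, w, h = view
    c0, r0 = int(x // tile_w), int(y // tile_h)
    c1 = min(grid_cols - 1, int((x + w - 1) // tile_w))
    r1 = min(grid_rows - 1, int((y + h - 1) // tile_h))
    return [(c, r) for r in range(r0, r1 + 1) for c in range(c0, c1 + 1)]

# A 640x480 window into a 3840x2160 source cut into 480x270 tiles needs
# only 6 of the 64 tiles:
print(tiles_for_view((1200, 700, 640, 480), 480, 270, 8, 8))
```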
This poster showed an update to the Ericsson composition engine, which was demonstrated at the last ICIN. New for this presentation was a different version of the application server which addresses an important problem – performance. A common problem of Java service execution environments is that the synchronous nature of Java Beans means that each service execution consumes an execution thread in the server. Threads are typically quite limited, causing services to block waiting for threads and limiting capacity. They solved this by retaining the synchronous program flow but implementing a way for the service to cache state and relinquish threads between actions, which allows greater utilization and reduces delay. They presented performance data showing significant improvement.
(Comment – this is a real problem and an interesting solution. It’s not what I expected from reading the abstract for this paper. Another alternative is redefining services as asynchronous (e.g. event driven state machines where each state transition is a short Java program). My own experience in mapping services to Java is that this works better, but it is not the way the enterprise world uses Java, so going that route means rolling your own environments and tools, which is why most telecom products use Enterprise Java Beans as an execution framework even though it is not designed as a communication service platform environment.)
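(Comment – to make the thread-relinquishing pattern itself concrete, here is a minimal sketch in Python’s asyncio rather than the paper’s Java environment: the service parks its cached state and gives up its thread whenever it waits on the network, so a single thread can serve many concurrent executions.)

```python
# Blocking designs pin one thread per service execution for its full
# duration; here each execution parks its cached state and yields whenever
# it waits on the network, so one thread serves many executions.
import asyncio

async def handle_call(call_id):
    state = {"call": call_id, "step": "routing"}   # cached service state
    await asyncio.sleep(0.1)   # e.g. awaiting a SIP response; the thread
                               # is relinquished for other executions here
    state["step"] = "connected"
    return f"{call_id}: {state['step']}"

async def main():
    # 1000 concurrent "service executions" on a single thread:
    results = await asyncio.gather(*(handle_call(f"c{i}") for i in range(1000)))
    print(len(results), "calls completed")

asyncio.run(main())
```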
This poster is really more about the testing facilities used by the team. Basically they used a federation of testbeds in multiple labs to construct a large mesh network which could be provisioned down to layer 2 through messaging. The federation of testbeds and the procedures for remote use are both parts of sponsored projects to enhance communication research in
This paper was on a simple service creation environment that could be used to allow enterprise users to implement a business process using mobile devices (both user devices and “machine” endpoints like sensors). They demonstrated a scenario for a package delivery service that delivers medical supplies, which have both delivery time constraints and temperature control requirements. Truck drivers carry a smartphone application that shows them where to go and what to deliver, and trucks have sensors that communicate the temperature of critical shipments. In the scenario demonstrated, the truck relays a temperature alarm which is routed to a service center where a dispatcher gets a display. The dispatcher can examine the temperature history of the shipment and the delivery constraint, decide to cancel this particular package, update the route for the truck to deliver the remaining packages, and at the same time schedule an express delivery from another truck to replace the needed shipment. Internet mapping services are used to compute best routes for the trucks and display them.
This was an interesting presentation of how smart mobile applications can replace and improve what today is done with specialized applications and devices.
This presentation focused on how to apply enterprise application development methodologies. The presenter described a particular method of analysis and implementation for enterprise services by Siemens, and described how it can be applied to designing and provisioning machine-to-machine networking to support the business processes of the enterprise.
She gave an introduction on where the industry is going – Voice is de-emphasized (lower revenues, no longer the main attraction, becoming just a function, not the bulk of the traffic.) (Comment – yes, but it’s still most of the revenue for many operators)
The growth is in entertainment (Multimedia, HD, 3D and TV are now mobile, TV and Mobiles merge and multiply). Sadly she observed that Steve Jobs died earlier this morning and noted the role he played in changing the communication industry.
“The IMS Shield” – a lot of IMS was designed to open the network but protect the operator. Analysts have pretty well written off the “protection” functions, yet nothing else really exists that is as good. “What the internet lacks, Telecom should fill”.
In next generation networks, a name URI replaces a phone number as a means of identification. Phone numbers will remain, but we must support URIs, and unlike phone numbers the domain name piece of the URI is typically owned by an enterprise that is responsible for assigning the user name piece of it. For a communication service the domain name is the important piece in locating where a URI is connected in the network. This is done by Internet DNS.
A problem today is that operators allocate pieces of domains they own as identifiers for endpoints in their network, but this results in URIs that reflect the operator, not just the real domain owner (typically an enterprise). This is not desirable.
Phone numbers have a hierarchical derivation that shows how to route them (country, region, local). URIs have somewhat the same structure. (Though most top level domains do not designate anything about region)
Number portability has changed this for phone numbers – ported numbers violate the routing conventions of normal numbers, and this has been solved by using a database of ported numbers to look up “exceptions” to the normal structure. The same approach could be used for resolving name portability in IMS (i.e. allowing the enterprise to keep their normal domain name within an IMS network and have it route properly, and in particular letting the company keep their “name” when moving to a different network provider). Name portability also implies that even within a domain, different individuals or services might be served by different IMS providers.
Another aspect of the challenge is that where something is to be routed depends on the service portion of the URI (e.g. SIP one place, HTML another, etc.)
Distributed DNS is used to solve these problems. DNS records can point to other servers to resolve the full address. By using this you can have the enterprise DNS (or a public DNS hosting the enterprise domain) point to the IMS operator’s DNS to resolve how SIP or other services are to be routed to the user. Because the enterprise’s DNS has the first resolution it may resolve different
The DNS query isn’t service specific, but the records retrieved contain information on services available. The records are used in a rule based method, where the rule matching the correct service may specify modifying the domain name portion of the URI and pointing to another DNS server (e.g. the IMS provider DNS) to correctly route this request. He showed a lot of scenarios for how to essentially use the DNS mechanism and the existing rules for DNS resolution to resolve URIs and address the need of enterprises to control their domains.
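(Comment – the rule-based record mechanism he described closely resembles DNS NAPTR records (RFC 3403), where each record carries a service tag and a replacement domain to consult next. A minimal sketch using the dnspython library; the domain, service tag, and delegation target are hypothetical.)

```python
# The talk's rule-based mechanism resembles DNS NAPTR records (RFC 3403):
# one query returns per-service rules whose 'service' field selects a match
# and whose 'replacement' names the next domain/server to consult. Sketch
# only; the domain and delegation target are hypothetical.
import dns.resolver  # pip install dnspython

def next_hop_for(domain, wanted_service):
    """Return the replacement domain of the NAPTR rule matching a service
    tag such as 'SIP+D2U', or None if no rule matches."""
    answers = dns.resolver.resolve(domain, "NAPTR")
    rules = sorted(answers, key=lambda r: (r.order, r.preference))
    for r in rules:
        if r.service.decode().upper().startswith(wanted_service.upper()):
            return r.replacement.to_text()
    return None

# e.g. the enterprise zone could delegate SIP routing to its IMS provider:
# next_hop_for("enterprise.example", "SIP")  ->  "ims.operator.example."
```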
The paper was actually written by a colleague in China Unicom.
Rich Communication Services were introduced along with IMS as a standard suite of services that IMS would support. They aren’t universally deployed yet, but getting there. Advanced services are a key need from operators to compete. (RCS really represents an attempt to define a standard offering) The current RCS release deals with instant messaging, file transfer, and multi-media calls. Very basic services people are familiar with.
He showed some simple RCS scenarios. One of the new things introduced in RCS is a way for the user to discover what capabilities for services can be supported. This was important, because the experience with MMS and video calling showed that users assumed these services would work; when something was wrong (e.g. the called user’s device didn’t support the service) and it failed, they wouldn’t try again. By doing service discovery the user knows up front that what they are trying to do will work. This includes being sure the user has both the capability and the communication support (e.g. don’t make a video call to a user who has limited bandwidth at the moment due to where they are). (Comment – yes, this is very much like capability negotiation in SIP, H.323, and other internet protocols. It works well mechanically, but I don’t know whether it really solves the user experience level issues (e.g. will the user be frustrated by not understanding why he can’t send a photo even though he can make a voice call, because his current location doesn’t support the bandwidth needed?))
RCS standardizes the services, so how do Telcos compete? Basically they compete on quality of service, price, etc. (i.e. the way they compete on voice). (Comment – yes, but this isn’t entirely satisfactory. Competition on non-functional aspects of service tends to devolve eventually into competition on price alone.)
He talked about the advantage of universality as a contrast to the Internet world where the only universal and interoperable service is really email – in contrast if you want social networking you have to pick a provider first and they don’t interoperate. (Comment – this is certainly true for many services, but curiously not all – Web2.0 has made some progress in allowing different services to be interfaced in new ways to solve some of these problems.)
He talked about the problem of supporting IM or Chat in the internet – people need multiple IM clients and, worse yet, need to be registered with multiple providers. He also talked about the growth of text messaging, proposing inter-operability as the reason why Text adoption in the
One of the keys for RCS is to open up APIs that allow the services to be extended and customized.
How is this happening? There is a group of major European carriers committed to deploying RCS in their networks. (RCS-e) Another thing being done is RCS apps, which can be downloaded on smart phones and give RCS services “over the top”. (Comment – this is very interesting, and provides a way of achieving some degree of universality quickly, which is probably important. Without this, I suspect that downloaded applications not compatible with RCS would quickly capture the market and make it difficult for the network operator to re-capture these services.)
Question (Hans Stokking) – have you looked at interoperability with existing internet based services? Answer – yes, but this isn’t the focus now. One problem is that until RCS is widely deployed there is limited interest from the internet providers. They make the APIs and interfaces publicly available, enabling anyone who wants to interoperate to do so. (Comment – this is probably the essence of the problem. The internet world is used to a much different model of adoption, where every user is free to adopt any service by downloading the application. The Telco world model depends on the operator to have the service installed first, which causes users who want it first to go for alternatives.)
Question (?) – have you thought about pricing? Answer – yes, and it has to fit within the carrier’s overall offering. Latin America is much different from
CSNA stands for Community Social Network Aggregator. Like the previous paper, the real author of the paper is a second author from NS India. (Comment – yes, this is one of the challenges for a conference; we really want the people who do the work here, but the reality of international travel is that often they cannot be. It is a real indictment of our industry that decades after video conferencing became available, we still don’t routinely use it to allow all the authors to attend and participate.)
RCS is a platform for this service and he will focus mostly on the implementation of the service. NSN supports RCS APIs and a methodology for extending services to provide differentiation (Comment – interesting – the last speaker really focused on the value of RCS as a universal service. Here we see a focus on differentiating a particular use of RCS)
What CSNA is about is interworking RCS with “legacy” services. (e.g. Facebook). (Comment – I don’t know that I ever heard Facebook appear in the legacy category before, though as I looked at the slides the concepts here seem to have originally come from the IM world. He talked about mapping buddies, status, and presence, but not using the terminology of Facebook and other more modern social networking services).
He showed a scenario where the user could select the social application and tell it to interconnect with Facebook. One of his friends (interesting – he said “buddy”) logs in, and as a result he gets a display of status. He showed chatting between RCS and Facebook (straightforward, but a different interface on RCS).
He showed uploading of pictures and sharing of status between Facebook and RCS.
The implementation uses a CSNA server, which looks like an endpoint in RCS (e.g. terminates SIP session). It is a gateway to external services, interfacing to Facebook and other services. It can also interconnect to gateways to display advertising or other networks. To do this it implements the APIs and protocols of Facebook and others (e.g. XMPP for presence). (Comment – this will work, but depends on the cooperation of the internet service (Facebook) to open those interfaces and maintain them as stable. There are also scaling issues that will arise if this service is popular. He also pointed out that this is converse to what was advocated earlier (allowing the internet service to do the interworking with RCS via RCS APIs and protocols)).
He went through the detailed implementation scenario. One thing that was interesting was that to do presence they wind up downloading the user’s full Facebook friends list and registering for updates for all. I believe this could become quite cumbersome for users with a large set of Facebook friends.
Question (?) – We did this in a university 3 years ago. The problem was rapid changes in interfaces from Facebook. In 2 months it changed enough to break the service before an important demo. Answer – well, it doesn’t change as often now, but the only real solution is contracting with the IP service provider to maintain a stable interface. He also said that changing the interface violates good practice; you want to introduce changes in a compatible way. (Comment – yes, this is the “right” way to do updates, but it is extremely hard, and unless there is some strong motivation for the provider to do this they won’t.)
Question (?) – You show how to display Facebook friends on RCS, can you do it the other way around? Can you import your phone address book into Facebook? Answer – yes, but you have to do this via applications written for Facebook. (Rebecca said usability research indicated there was a desire to import Facebook contacts onto the phone but not the other way around)
General Questions:
Question (?): How do you charge for all this in real-time? Answer – yes, there has been a lot of thought about charging for RCS. In general it’s like phone charging – Video would be an increment over voice. The IMS supports real-time charging. A real benefit to RCS is it fits the Telco model. (Comment – yes, but as soon as you allow interworking with Apps doing RCS over the top, you probably either confuse users or lose control)
There was some more discussion on the challenges in charging. As long as the user perception is that the internet is free and unlimited, it is difficult to charge for services in the mobile world that would be free in the internet world, because competition from over the top solutions undercuts paid Telco applications. Max Michel indicated that

(Comment – this is a key problem for mobile operators. In the
Question (Anders Lundqvist) – what’s the timeline for implementation? RCS is only going to be interesting when a reasonable number of operators implement it. Answer – some operators have communicated timelines.
Apparently there are agreements within countries to get all operators to support it on these timelines. They expect some variation as we get to implementation, but there is strong commitment. Another factor is the propagation of SIP stacks onto devices. LTE requires SIP, so all devices will have it. This has historically been a problem (implementing SIP). (Comment – I’m surprised at this because SIP isn’t that hard, and I’ve seen implementations on programmable phones for some time.)
Comment by someone who spent lots of money on implementing RCS – The real problem is charging. Consumers won’t pay for this, because they see that these services are free on the internet. What’s the commercial model for the carrier to recover the cost?
Answer – we need new business models. Skype, Google, Facebook, et al get money from other sources, Operators need to play the same game.
Comment (Anders) – you have one shot to get this right; if it’s wrong it will fail. Question to speaker from Telefonica – you are within 3 months of launch, how will you get it right? He said one opportunity is pitching this to subscribers who are not paying for mobile broadband. (Comment – this is interesting, because for some the “all you can eat for a fixed price” model of mobile broadband commonly in use is a disincentive because the price is too high.) Another audience member asked why we can’t behave more like the internet companies – launch a preliminary service to demonstrate it and gauge demand, then figure out how to monetize it.
Rebecca’s summary – this was supposed to be a session on the technology of RCS. The business model is clearly a major interest and a good topic for the next ICIN.
Niklaus Blum was the original presenter, but couldn’t make the presentation.
The focus of this work was smart information sharing, allowing a user to share information via a cloud, and get notification of changes to shared information. Notification depends on user preference and device characteristics.
Push services allow immediate notification to users and can save network resources. (Comment – yes, the alternative is polling from the end user device, which is not only slow but generates a big load of unproductive polls focused on the information source.)
He gave a taxonomy of push services (including MMS, SMS, Email, and Google C2DM).
They implemented this service as “SIS”. The implementation has an identity management component which takes care of identifying the end user to internet services being worked with. Another major component is the Virtual Bookshelf, which caches information that is being shared with users. The VBS is really what’s responsible for pushing information to users.
The work flow receives messages from internet services (e.g. Google, Flickr, etc.). A notification service is triggered if the message is relevant (i.e. reflects a change). The notification service uses policy rules to determine what changes will require a push update, and consults information on what channel the user wants to receive the update with. The messaging service will then generate the right message to relay the information. These components are all accessed via RESTful APIs which allow the building of applications.
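(Comment – a minimal sketch of that notification step as I understood it: a relevance policy gates the push, and a per-user preference picks the delivery channel. All rule and channel names are my own illustration, not the SIS implementation.)

```python
# A relevance policy gates the push; a per-user preference picks the channel.
# Rule and channel names are illustrative, not the SIS implementation.
USER_PREFS = {"alice": "c2dm", "bob": "sms"}   # preferred push channel

def is_relevant(event):
    # Policy rule: only actual changes to shared items trigger a push.
    return event.get("type") == "change" and event.get("shared", False)

def notify(event):
    if not is_relevant(event):
        return None
    channel = USER_PREFS.get(event["owner"], "email")  # fall back to email
    return f"push via {channel}: '{event['item']}' updated for {event['owner']}"

print(notify({"type": "change", "shared": True,
              "owner": "alice", "item": "vacation-photos"}))
```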
He described a service called Nippon Post that allowed notifications to users of relevant events. He went through a complete message flow demonstrating the architecture.
Question (?) – Will it be possible to start applications on the end-user device via a Push? (Apparently this can’t be done with iPhones now but can be on Android.) Answer – they didn’t focus on what was being done on the client in response to receiving the push. (Comment – this is a real quandary. Long ago when implementing internet call notification and control we wanted to push the call notification and invoke a PC application in response, but doing this securely is difficult) Also, SMS will work for push, but it’s best effort and can be very slow. Most of the time it works well, but there’s no guarantee.
The web is changing – it used to be all about human-to-machine communication; now social networking has added human-to-human communication. Currently we have many different services that are not compatible. A major concern is privacy: he described how Facebook, for example, manages to trace and build profiles for users who have never been to the site, based on the actions of people who know them.
Technology is also changing. We have lots of devices in our homes, and our personal devices include sensors and cameras. We want all this to be accessible, but still controllable. (Comment – TV and movies are full of scenarios where smart hackers manage to activate cameras, microphones or other devices to spy on others. I don’t know how realistic these scenarios are, but this, in combination with the number of both government and commercial agencies who are interested in tracking users, is a scary scenario.)
What he said was that users want to participate in social networks and even share some of those home capabilities, but still retain more control over their identity and their devices. The focus of the rest of the talk was really on how to bring devices and services in home environments into social networks without compromising privacy or identity.
Claudio overviewed a lot of the relevant standards. This work is part of a big EU project called “SOCIETIES”, with a lot of participants and over 10 Million Euros in funding.
The objectives include the creation of communities for sharing, which can be done dynamically and creates a Community Smart Space (CSS), where users can share resources and access services.
To give a concrete example, imagine that someone wants to interconnect their home environment with another user’s in order to allow devices to be shared across networks. This can be done by a VPN approach (i.e. gateways in both “islands” which tunnel through the network) to allow everything in one name space and be accessed from either place. There are products that support this today, and it works. There are standards for the protocols, but not all home gateways support it.
A second approach uses an application to integrate two LANs with bridge software. The application runs on one network and exposes local resources via SOAP. The software has to be installed on both sides. This is easy and the software is open source, but it’s less flexible, and it requires a node in each LAN that is always up. (Comment – not sure, but this might also require a static IP address or some clumsy work around for it)
He continued describing efforts in standards to better support this kind of communication.
Questions: Many people don’t care about the privacy issues in internet services – what’s TI’s position (why are they working on solving a problem that many don’t care about)? Claudio’s answer was that people should care, but that the problem isn’t unique to social networking; in fact users should be more concerned about essential internet services (e.g. search) which collect private data even though the user is not doing anything that he or she expects to be shared.
This is joint work with
Smartphones have exploded in popularity (according to Gartner – 67M in the
Smartphones as a category existed for 10 years as a merger of PDA and phones. Most had keyboards of some sort. The first big innovation was touch screens, which made them more physically attractive, but the big surge came in response to Apple’s ecosystem of downloadable apps. (Comment – yes, but all those earlier phones were programmable too, what Apple really did was less about the phone than the App store, and curiously enough they were uniquely positioned to do it based on their experience with iTunes.)
What’s the attraction? Mainly that they support a completely new and improved user experience. Low level reasons include lots of processing, an advanced OS, and lots of sensor and display capability. The high level reason is the applications.
The market is now organized around the smartphone platforms. Each maker has an ecosystem. This is a terminal centric approach and contrasts with the traditional telecom approach of putting the service in the network and treating the terminal as just a way to access the user. Their work compares these approaches for “context aware services”.
What’s a context aware conversational service? Basically it is one that understands the user’s context and uses that to tailor the service to the context. What’s the premise? Adapting interfaces and behavior makes the service more natural for the user.
What’s context? -- It is identity, location, activity, device characteristics, etc. Context also includes lots of intangibles that are hard to measure (user’s mood, ambient noise, etc.).
Context awareness can be used to anticipate needs (e.g. divert calls while I am in the car, reconfigure my menus to make choices relevant to my context easier to select). Context can also be used to trigger services – when I arrive, notify someone. Reminder services – learn what I usually do and remind me if I don’t do it. (Comment – this begins to approach the “creepy” category.)
As an example they used “Communication Diversion”, an umbrella term for anything which diverts calls based on what you are doing. (He gave an example of diversion rules allowing a user to say divert calls to voicemail while driving, unless the caller is someone who is important to me and it is an emergency.)
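(Comment – the example rule is simple enough to write out. A minimal sketch of that diversion policy in Python; the context fields and the set of “important” callers are illustrative assumptions.)

```python
# Divert to voicemail while driving unless the caller is important AND the
# call is urgent. Context fields and the "important" set are illustrative.
IMPORTANT = {"spouse", "boss"}

def route_call(caller, urgent, context):
    if context == "driving":
        if caller in IMPORTANT and urgent:
            return "ring through"          # the stated exception
        return "divert to voicemail"
    return "ring through"

print(route_call("spouse", urgent=True, context="driving"))     # ring through
print(route_call("colleague", urgent=True, context="driving"))  # voicemail
```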
They compared implementing this as a terminal centered or a network centered approach.
He presented some scenarios for things that challenge the implementation, like having multiple devices, or having your device disconnected from the network. Network implementations are desirable because they aren’t dependent on reaching the terminal and don’t have to change if I change devices. (Comment – this is all sound and logical, but it may be less important to the user than how easy it is for the user to obtain, activate, and manage the service.)
Another dimension of comparison was on where the context information comes from. If it’s local to the device, the terminal based solution is superior, but context might come from the network, such as QoS (or in this case the status of the caller), in which case network implementations are easier. There are no real standards for context data. If advanced processing is required the suggestion is that doing this in the network is better. (Comment – the reason given was all about processing and electrical power, but I suspect in fact the real deciding factor will be how easy it is to update the processing. Unless installing new processing in the network is easy, I expect it will be much easier to plug in some kind of analysis module and replace it on the terminal.)
There are other issues. Evolvability is easier in the network (it is hard to update lots of terminals and get them all updated). Scalability is better in the terminal (each one adds processing power). His claim was that privacy and security would be better for network approaches. (Comment – I don’t know here. The assumption is that the operator is trusted. I think it depends on the particular information processed. The operator approach has the disadvantage that it aggregates a lot of private data in one place, making that database much more attractive to attackers than an individual phone; on the other hand, I doubt users want to download information they consider private to the phone of someone they call so that user’s call diverter can decide whether or not to let them through.)
Question (Heinrich) – He knows call diversion is important, but for 10 years he hasn’t personally found a reason to use an application like that (he was talking specifically about the divert calls while driving)
Question (BT person) – context applications are hard for users to understand. How do you make this easier for the user to understand (and as a result more popular)? Answer – we have to help the user understand the service and help the user properly configure it. (Comment – this is hard. I think this is really an area where innovation is needed to create some new way to manage these things. Look at some of the things we now take for granted in interfacing to computers that were radical innovations at the time, like WYSIWYG word processing, spreadsheets, GUI interfaces, touch screens, etc. Re-imagining how the user interacts with an application can in fact make it much easier for the user to get it right and understand what they do.)
John opened the session presenting a slide he created by uploading all the papers to a web service named “TagCrowd.com”, which creates a slide highlighting the repeated themes. Cloud and Services were standouts, along with a lot of other topics that are important to us.
Co-authors for this paper included
He started with a brief overview of cloud computing, saying that it will grow from 40 Billion to 240 Billion (presumably dollars) in 2020. Smart phones will double in 5 years, and
What are cloud benefits? -- Reduced CapEX, reduced OpEx, and shift from CapEx to OpEx. There is a potential for new services offered from the cloud for new revenues and this is what they will focus on. (Comment – yes, this is interesting, but of course everyone is playing this game and when everyone tries to get into the same business it is usually not a good time to enter.)
What are the barriers to adopting cloud solutions? A survey revealed security, performance, and integration with other applications as key barriers, with others such as physical location and fear of “lock in” being very significant.
What is VISION Cloud? It is an EU program run by a consortium of 40 partners. Their objective is improving global delivery of rich data and storage services using a smart cloud architecture. (Comment – interesting, this is clearly focused on data and storage, not necessarily processing or communication.) Some key aspects:
(Comment – now I understand why the focus on storage. It’s not storage as we know it, it’s smarter storage)
How does one develop a business plan – an idea generates a model, a model generates a business case, and from that a plan is generated to implement it.
They used Osterwalder’s business model framework (too complex to explain here, but it outlines all the fundamental things a business has to do, like targeting customers, managing customer relationships, handling distribution, and controlling costs). He used the model to develop a business model for an iPod.
He then used the same approach to look at how operators could develop a service offering for cloud computing. The framework is just a way of framing discussions about the feasibility of doing something, not a definitive plan. Based on the discussion he felt that it would be feasible for Telcos to provide cloud services and that Osterwalder’s business model methodology can be applied to a target application. The next step is to tweak the elements of a model and produce a business case. (Comment – I’m not sure I got much out of this – too abstract for me, I guess. I’d like to see a more concrete analysis of where the Telco plays.)
Question (Anders Lundqvist) – have you used this same model to see where end users (small/medium enterprises) exploit the cloud? Answer – no, too many opportunities. (Comment – yes, this is why I’m uncomfortable with the level of “cloudiness” in this vision. Anders is making the point that you need a business model for the end user which proves the service will be useful in order to decide whether you can actually sell the offering.)
Question (Igor Faynberg) – Did you look at the potential for a Telco to be a “Cloud Auditor?” A Cloud Auditor is an independent verifier that the cloud provider conforms to their advertised requirements. Answer – this is a possibility they haven’t looked at. (Comment – I think they could do this but I don’t know how much money is in it.)
Question (Dave Ludlam) – early in your presentation you made a distinction between accounting for access vs accounting for consumption. Answer – he struggled a bit answering this because colleagues worked on it but his point was really that they have the potential to account for different things than traditional IT and generate new revenue streams.
Question (?) Would it be more appropriate to compare clouds to things other than iPod? (yes, but it was just an example).
Computing may some day be organized as a public utility (John McCarthy, MIT, 1961). (Comment – not far fetched at all; in about 1968 I attended a conference at MIT and met someone who would become a colleague, who was promoting something called a computing utility – plug in and use it like electricity or phone service. That was Bob Frankston, one of the inventors of Visicalc.)
He went through trends in computing – virtualization of hardware and even software, moving towards a cloud implementation, and looking at both private and public clouds and hybrid architectures that exploit both seamlessly.
The legacy of value-added services (voice and internet) is silo implementations with some commonality in networks and operations. These are migrating towards cloud-based implementations (he talked about how to do that, including looking at business cases for it).
Retiring legacy products is important, as are factors like your strategic intent. Cloud solutions may be particularly attractive to launch new services with lowest investment and to handle existing services with low usage (either because they are end-of-life or just low penetration).
He gave an example of replacing 8 frames of equipment for MMS, WAP, and SMS with two (redundant-site) frames of cloud. (Comment – while I won't argue with the benefit of a cloud architecture, I find you have to be VERY careful in looking at comparisons like this. I've been burned several times by neglecting the impact of
He went through a basic structure of Infrastructure-as-a-Service to virtualize these services. The biggest problem is the SS7 signaling, because it requires a physical card. (Comment – amazing anyone still uses it. Sigtran is the solution, but we still have networks that implement T1 and lower-rate SS7, which requires a dedicated card to terminate and complicates virtualizing the architecture.)
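For readers who have not met Sigtran: it carries the SS7 application layers over SCTP/IP, which is exactly what removes the need for a dedicated card. A minimal sketch of the idea (the gateway address is hypothetical, and this is only the transport association that M3UA runs over, not a working signaling stack):

```python
# Sketch: open the SCTP association that M3UA (SS7 MTP3-user adaptation
# layer over IP) runs on. Assumes Linux with kernel SCTP support.
import socket

SG_HOST = "198.51.100.10"   # hypothetical signalling gateway address
M3UA_PORT = 2905            # IANA-registered port for M3UA

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP)
sock.connect((SG_HOST, M3UA_PORT))
# A real Application Server Process would now exchange ASP Up / ASP Active
# management messages before carrying any SS7 user traffic.
sock.close()
```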
He illustrated the use of public and private clouds to handle peaking in SMS load – use a private cloud to implement most of the service and keep all the private data, use public cloud services to handle routing during peak loads, and avoid any regulatory issues related to exporting private data into public clouds.
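A minimal sketch of that burst-to-public-cloud routing idea as I understood it (the names, the capacity figure, and the anonymization step are my own assumptions, not details from the talk):

```python
import time
from collections import deque

PRIVATE_CAPACITY = 1000   # assumed msgs/sec the private cloud can absorb
window = deque()          # timestamps of recent submissions

def submit_private(msg):  # stub: full service, subscriber data stays in-house
    return ("private", msg)

def submit_public(msg):   # stub: routing-only instance in the public cloud
    return ("public", msg)

def anonymize(msg):       # stub: strip private data before anything leaves
    return {"route_to": msg["route_to"]}

def route_sms(msg, now=None):
    now = time.time() if now is None else now
    window.append(now)
    while window and now - window[0] > 1.0:   # 1-second sliding window
        window.popleft()
    if len(window) <= PRIVATE_CAPACITY:
        return submit_private(msg)            # normal load: stay private
    return submit_public(anonymize(msg))      # peak: only routing goes public

print(route_sms({"route_to": "+15551234567", "body": "hello"}))
```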
Question – what's the problem with SS7? (He answered that it needs a physical link; the questioner indicated you could use Sigtran to gateway the SS7 into IP and then just interface in IP.) (Comment – other than cost, I seem to recall there are operational problems introduced by the gateway.)
Question (BT person) – You made the OSS/BSS disappear in your implementation. Can you apply cloud computing to those as well? Answer – yes, but many vendors tie their implementation to particular hardware, which makes this hard. (Comment – this is probably mainly a legacy issue with old systems, but it's real.) The real intent of the question was to point out that OSS/BSS may be just as much of an opportunity as the actual service.
Question (Hui-Lan Lu, ALU) – Telco systems are traditionally 5 9s reliability, do cloud systems match up? Answer – they aren’t proposing migrating to commodity servers, the servers are still carrier grade, so they haven’t tested this. (Comment – they might reach “5 9s”, but having worked a while in that business a real issue is whether the networking involved in the cloud solution introduces new points of failure that impact solution reliability. That is probably the real limitation in replacing the physical SS7 links with a gateway!)
Question – we need to consider how users connect to the cloud. How about using a cloud-based signaling infrastructure to connect? Answer – yes. (Comment – see above!)
In 2008, Nokia's value was 50 billion; now it is 15 billion. At the moment the most famous company in
Earlier research indicated that a hybrid cloud architecture works well when traffic varies significantly, since it can handle large loads without overprovisioning. (Comment – one thing I find interesting is that apparently nobody considers the solution of putting everything in the public cloud. Apparently it's regulatory requirements to keep data under control that drive this solution. I'm not sure there is a strong business motivation for it unless there's a significant pricing benefit in owning part of a cloud – you don't want to take on managing it!)
They looked at MVNOs' usage of the cloud and at strategies for cloud approaches through interviews with key players. They were asked about key attributes of their business, what they felt were the benefits of cloud computing, and the barriers to implementation.
They used a “
The most important issues for MVNOs were cost benefits, cross-location architecture, high performance, and carrier-grade implementation (interestingly enough, very few really needed carrier grade
In the cloud concept the big concerns were data security, performance, and carbon footprint. (Comment – as I understand it, national requirements on businesses in various countries in
There was consensus on what should move to the cloud first, and it was Billing! Lots of question marks about the network pieces involved in real-time control. (Comment – Billing isn't surprising, but I'm surprised that OSS didn't pop out too, since it's not really 5 9s nor does it usually require special interfaces; I believe the problem may be that people don't want to touch it because of bad experiences. Even the real-time services of an MVNO, like Pre-paid, are I believe candidates, provided you can handle the network interfaces without introducing unacceptable risks of failure.)
They did a proof of concept showing a hybrid public/private implementation of customer relationship management using a standard commercial product and keeping the data in the private cloud. This was done to prove that they could use this approach to handle the application and implement scaling to handle peak traffic.
One piece of this was that even in the cloud you need to worry about location – there is high traffic from the private DB to the cache that serves the application instances in the public cloud, and bandwidth and security are both concerns if you have to reach across the public internet.
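What they describe sounds like a read-through cache across the trust boundary; here is a minimal sketch of the pattern (function and variable names are my own illustration, not theirs):

```python
cache = {}  # lives in the public cloud, close to the app instances

def private_db_fetch(key):
    # Stub: in the real deployment this call crosses from the public
    # cloud to the private DB and needs an encrypted link (TLS or VPN).
    return f"record-for-{key}"

def lookup(key):
    if key in cache:
        return cache[key]          # cheap: stays inside the public cloud
    value = private_db_fetch(key)  # expensive: crosses the boundary
    cache[key] = value
    return value

print(lookup("subscriber-42"))
print(lookup("subscriber-42"))  # second call is served from the cache
```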
The advantages of the cloud include higher performance (cycles, not reliability) and elasticity. Sustainability is a concern (can you sustain a performance improvement?). Avoiding CapEx is a big issue for MVNOs. A rule of thumb is that for every $1 you spend on IP infrastructure you spend $8 to administer it, which makes minimizing the capital you put out really key.
Concerns include data security, especially putting data outside of the country. Performance of real-time functions was a concern, as was the fact that cloud solutions require different staff expertise. (Comment – the concern over boundaries is interesting. I guess the world really isn't going to be "flat" any time soon!)
Question (Claudio Venezia?) – Is software licensing a concern? Answer – that's a possible issue; he has heard that some solutions run into the problem that they have a license to use software internally but cannot run it in their application on a public cloud. Someone else from the audience indicated there may be more complicated restrictions – like you can't use the software to serve people outside your company, but running it on public computing is okay. She said the bottom line is you need to negotiate with your software supplier to make sure the way you are proposing to use it is acceptable.
Comment (from John O'Connel) – Representing a major software supplier, he said you do have to check, and there is a huge variety of software licenses, but keep in mind that software companies want you to keep using their software and are motivated to solve the problem, not to block you or drive you to another implementation.
Question (Max Michel) – Does Prepaid vs Postpaid matter? He said that mobile data is actually more like Pre-paid, since users have limits. The real issue is latency between the applications that track and enforce usage and the network elements, and making sure you can still implement the desired billing model.
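A back-of-envelope illustration of why that latency matters (my numbers, not the speaker's): while an enforcement decision makes the round trip between a network element and a cloud-hosted charging function, traffic keeps flowing, so a capped user can overshoot their quota:

```python
rate_mbps = 50      # assumed subscriber throughput
latency_s = 0.200   # assumed round trip to the cloud charging function

# Megabits in flight during one enforcement decision, converted to MB:
overshoot_mb = rate_mbps * latency_s / 8
print(f"possible overshoot per decision: {overshoot_mb:.2f} MB")  # 1.25 MB
```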
This is a big collaboration including Telecom Italia, Telefonica, and Telecom
What this talk did was describe a dynamic cloud environment in which each provider has their pieces realized as a cloud, and the solutions are interfaced and cooperate to deliver services. He presented a 5-layer architecture to realize this (physical resources, virtual resources, virtual distributed execution, application components, and applications). He then talked about what enablers were needed for dynamically configuring pieces from multiple providers, indicating which applied to which layers. (These were things like dynamic discovery, composition, and resource allocation, dynamic
(Comment – this is interesting, but looks a bit like “this is the way I’d like the world to look if we re-invented it.”)
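To pin the structure down, here is the 5-layer model written out as data, with a mapping of the named enablers to the layers they might touch (the layer names are from the talk; the mapping is my own guess, since my notes on it are incomplete):

```python
LAYERS = [
    "physical resources",
    "virtual resources",
    "virtual distributed execution",
    "application components",
    "applications",
]

# Assumed, illustrative mapping of enablers to layers:
ENABLERS = {
    "dynamic discovery":           {"virtual resources", "application components"},
    "composition":                 {"application components", "applications"},
    "dynamic resource allocation": {"physical resources", "virtual resources"},
}

for enabler, layers in ENABLERS.items():
    assert layers <= set(LAYERS)   # every enabler targets declared layers
    print(f"{enabler}: {sorted(layers)}")
```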
They discovered that these enablers exist, but they aren't ready for production use, don't work together, and aren't complete. Figuring out how to really build a working system with these limitations was beyond the scope and timeframe of their project.
He showed a matrix of the solutions that applied to the different layers, to indicate what products you could build on.
He moved on to figuring out where the Telcos should play in the cloud – what they should do. The best opportunities seem to be in providing the technical platform and networking, and perhaps in aggregating applications.
Question: You presented two layering models, one with 3 layers and one with 5. (Answer – the 3 layer picture is really a 5 layer picture – the top layer bundles applications and application components, and the picture doesn’t explicitly show physical resources.)
Question (Bernard Villian) – in the 5 layer picture you have lots of dependencies you have to work out to make it work. Is this realistic?
Question (?) – does the physical layer participate in
ICIN is focused on something that is useful. Total participation was about 160 people (that includes the ICT people and the attendees of the tutorials and workshops). Roberto gave the overall statistics on papers, demos, and presentations, and then some general themes:
Lots of other things were covered: the need for fair regulation (the EU is driving telecom toward a commodity, but not always with fairness); energy awareness (several papers and a demo dealt with it); identity and security; and media services – 70% of user-generated content is produced by mobile phones, and we don't really offer the users much to help them.
Roberto went through the individual sessions and the observations of the chairs. Some tidbits:
Overall:
Some new areas for possible further exploration were identified:
What we did well:
What we did not do well:
This conference is unique – the most influential conference in the service layer, with the right mix of technical and strategic material. There is a vast community behind the scenes; we need to leverage it in order to promote our solutions and ideas.
“If you want more ICIN – sell it! We need to get more people.”
The session concluded with the awards for Best Paper, Best Presentation (one for each day), and Best poster/Demo, as well as honoring two long time supporters of ICIN for their service.
Stuart Sharrock closed the conference noting that we intend to hold ICIN in
The Technical Program Committee (TPC) is about 30 people who plan the conference program. Most of the session chairs and all of the paper reviewers come from the TPC.
Stuart started with a review of the registration statistics from the past several years. While the total is as high as it has been in years, this includes a lot of free registrations for today, tutorial participants, and others. The number of paid registrations (about 100, which includes committee members and authors, who pay much reduced rates) is the lowest in years and not enough to sustain the conference. The big problem is company policy – you can't travel outside national borders and/or you can't travel at all unless you are presenting. The issue isn't that people don't want to come; they can't. (Comment – I observed this a decade ago in the
Roberto pointed out that there were fewer submissions than in the past, and that the session chairs have done more work than in the past to make sure that the papers we took were high quality. Stuart mentioned that there were problems with papers we accepted "on a promise" – i.e. the abstract was interesting but very incomplete. Many did not complete an acceptable paper, or withdrew rather than finish. Stuart may suggest that we require final papers at submission in the future. (Comment – I agree with the problem with promises, but not with the final-paper requirement. The thing I would suggest is that incomplete abstracts be accepted only if a TPC member is willing to personally vouch for the completion of the paper and to track it. Some interesting work isn't done in time for the abstract deadline, and requiring a full-length paper with complete references and background will only make it harder to publish.)
The keynotes are in good shape, apparently, except for some concerns about commercialism. Roberto is concerned about the number of session chairs who did not nominate candidates for "best paper". Are we getting quality? (Comment – yes, I think so, but with many sessions and only 3 papers in each, experienced session chairs are using their judgment to limit choices.)
Stuart presented the registration statistics. The striking piece is that only 16% of the delegates are normal “full fare” delegates. (Maybe smaller than that considering some of those delegates get discounts for association with IEEE or other sponsors). This is not a viable situation commercially. Stuart noted that we are not a “trade show” which is organized for getting buyers and sellers together and paid for by the sellers (exhibitors), but are an IEEE style conference where people come to share information. For IEEE conferences (the ones they run themselves) they have gone from the kind of rate structure ICIN has, to one where they charge speakers the full rate. The suggestion is that people come for the quality of the program, and that paying isn’t what’s against company policy, so people will still come. Full rate at ICIN is still roughly half that for other conferences. (Comment – I think this has a chance, but I wonder how many of the people who come, including authors, are in fact self-employed or from university programs without grant support, where the cost is a significant factor.) The proposal is also to use the structure that Stuart has in place for ICIN to organize more tutorials independent of the conference – use the ICIN community to identify presenters for tutorials.
Comment from Rebecca Copeland – She had experience with an IEEE-style conference (IT oriented) that is very successful – 300-400 delegates. They were ruthless about pricing for authors – full price, and one person can't present two papers. This was because people would send one person to present several papers to save on travel and fees. You also have to make the rule that the paper must be presented to get published, since publishing the paper is the real requirement for academic authors, not attending the conference. Stuart – the IEEE used to allow someone to present multiple papers, but they are going away from it. This happens at ICIN too. (Comment – ICIN has had a history of taking place during world events that prevented travel for some – wars, epidemics, financial panics, etc. I have had to present for Lucent authors before.) There was a lot of discussion on whether it is sufficient just to pay for registration or whether you have to have someone attend the conference. We had an academic viewpoint from Noel Crespi supporting the full-fare, must-present model, and in fact requiring a full paper, but he thought ICIN might be an exception because the abstract model is an attraction to submitting here.
Stuart described some of the interactions he has had with authors. The IEEE already has a "blacklist" of authors caught plagiarizing, and he might want to work with others to track people who submit papers and then withdraw them.
Comment from Rebecca – Sometimes you need to have the paper accepted to get the travel budget (i.e. failure to attend after being accepted might be out of the author's control, and we shouldn't punish such people). Also, she supports the "full paper" requirement as a way of avoiding the problem of unfulfilled promises.
Thomas – There has been a change of environment – ICIN used to be focused on meeting and discussion; publication was not the key focus. For an IEEE conference two things are different, and we have to acknowledge this if we are going to become more like an IEEE conference:
(Comment – I think this is important, because ICIN is unique in my experience in being neither an academic conference, where the work is comprehensible only to active researchers, nor a trade show, with the focus on marketing. I believe this is highly desirable and ought to be an advantage in marketing the conference, but it apparently has been difficult to explain. I have attended many recurring conferences which started out on the ICIN model – very technical but general enough to be appreciated by everyone. As the conference and the field matured, most shifted either towards the IEEE/academic model (very technical and narrowly specialized) or the trade-show model (commercial).) He talked about the potential to divide the conference into full papers and short papers. Full papers are reviewed as such, but it is desirable to have more interactive sessions where work that is "newer" and more general is presented.
Rebecca – the problem is that ICIN is having an Identity Crisis. She feels there is value in keeping the conference more focused on strategic issues related to the services layer (i.e. not deep technical work), but use the workshops as a vehicle for detailed technical presentations like those that are the focus of IEEE conferences. To implement this each workshop would have an independent and much narrower TPC with reviewers specialized and expert in that particular field.
Comment (David Ludlam) – We had more problems this year with papers fulfilling promises, and it arose because we were taking marginal papers to ensure a commercially viable conference; but the big problem is the low in-person participation in the TPC meeting, which made it impossible to have the best discussions on these. We shouldn't have taken the risk.
Stuart: ICIN publishes papers on strategy and trends, and they get published in IEEE Xplore even though they are not typical for IEEE.
Thomas: We need to recognize that the authors split into two groups: for some, IEEE Xplore and the IEEE review process is a benefit, but for others it is a burden. If the issue is getting more people paying to go to the conference, maybe we need to create more opportunities for short contributions with fewer requirements on the authors. (Comment – on balance I like the suggestion of "short paper" and "full paper", where short papers are reviewed on the abstract and not expected to be IEEE standard, while full papers are expected to be in depth.)
Bernard Villian: ICIN is a business – we need to make money. Our subscriber base is authors, we need to figure out how to do exactly what the Telecom world has to do – increase subscriber base or increase “ARPU”. There are strategies (marketing, merger/acquisition, etc.).
Summary from Roberto: The core of ICIN is service layer strategy, and we do not want to make this more burdensome, but there is also a desire to expand and increase quality (depth?). Workshops might be the way to do this. Quality still has to be improved (i.e. we must address the issue of non-completion of papers). We need to discuss this and other issues.
Best paper nominations – because of the late submission of the nominated papers, most chairs hadn't read them all, so we had a quick one-line presentation from each chair. There will be only one best paper awarded this year (no reflection on quality, just logistics). Roberto distributed the list of nominated papers with instructions that there would be 2 rounds of voting to determine the winner on Wednesday evening.
During the lunch break I had additional discussion with Rebecca on her comments. She was unhappy not to receive good comments on the details in her abstract, and ultimately took technical detail out of the paper to satisfy review comments that the material needed to be more readable. This is certainly the quandary for the conference – going into depth will basically make the material less generally readable. It also challenges the current review process, which utilizes a panel of people who mostly do not have in-depth expertise in any particular area. (Comment – the process collects information from each reviewer on their level of knowledge of the topic, but doesn't use it well. I believe that question on reviewer expertise should determine which reviewers are used to compute the score for technical quality, and which are used for readability and interest. I believe we could ask the same review questions but process the data differently, and issue instructions to the TPC members that produce the right feedback.)
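Concretely, the scoring change I have in mind might look like this (the field names, scales, and expertise threshold are all my own assumptions):

```python
from statistics import mean

reviews = [
    # (self-reported expertise 1-5, technical quality, readability/interest)
    (5, 8, 6),
    (2, 5, 9),
    (4, 7, 7),
    (1, 6, 8),
]

EXPERT = 4  # reviewers at 4+ count as experts on this topic

# Experts determine the technical-quality score; non-experts determine
# readability and interest -- same questions, different weighting.
technical = mean(t for e, t, _ in reviews if e >= EXPERT)
readability = mean(r for e, _, r in reviews if e < EXPERT)
print(f"technical quality: {technical:.1f}")
print(f"readability/interest: {readability:.1f}")
```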
Here are my observations on logistics issues for next time:
Lunch on Wednesday with David Ludlam, Roberto Minerva, and Hans Stokking – Roberto described some of the difficulties he has had with TI in funding new services research. TI wants new services, but they really want people working on ones with immediate potential. They need to replace 200-400 million euros of revenue each year lost to erosion of the PSTN market.
He wanted an organization of about 200 engineers focused on doing something new. The key here is the feeling that they can't succeed with a "me too" offering and would be more successful looking at ways to enter new markets (do something disruptive). Getting support has been impossible. At the same time, Telefonica reorganized and put 2500 people into such an organization serving Telefonica Digital. Telefonica's business in South America now exceeds that in
We also had a long discussion on one of the morning presentations – on "sender pays" as a way of paying for mobile internet. There seemed to be a lot of opposition to this on the grounds that the common view of many users is that they are already paying for unlimited internet service, so why can't they just get what they pay for? One thing I thought curious was that this model has already been proposed for fixed internet in the US, largely because of the characteristics of cable and other architectures with shared access and the big surge of video both from server-based businesses (e.g. Netflix) and P2P (e.g. BitTorrent). Apparently the fixed network isn't as big a problem in
A discussion on privacy took place after some discussion on Facebook. Hans indicated that the
Finally, a discussion on the current financial disruptions internationally. I suggested this was in part a consequence of our industry – the digital downside: by enabling "instant" communication, which enables rapid trading of everything, we have encouraged gambling on economies, which has created instabilities. (No, we haven't done it all, but I have been talking about the unintended consequences of internet and communication technologies with many people and took the opportunity to introduce this theme to get reactions.) There was a lot of agreement, but most blame the governments and the greedy for opening up the movement of money, and propose legal solutions, not technical ones.
In another discussion we talked about the need for innovation. I claimed that what the communication industry really needs is innovation in service usability like Steve Jobs supplied via Apple. I cited other examples (spreadsheets, word processors, etc.) where a new user interface allowed a new product to capture a market by changing the way users did something in a way that made life simpler and easier for them. I claimed that a problem we often have is we are blind to this because we are so invested in the way we do things now.
Dave Ludlam suggested as a counter-example the real innovation we have seen in squeezing more bits out of copper and radio bandwidth to enable interesting new services unimaginable a decade or two ago. I claimed this is very good, but it's fundamentally different: squeezing more bits into the bandwidth is engineering, which is different from inventing a whole new way to listen to music or make phone calls.
Another aspect of this discussion was on Tablets. Apple introduced the unsuccessful
Shifting gears, we had some discussion on what keeps more people from coming to ICIN, and specifically the problem of travel. Many cited that airfare matters more than hotel or registration cost, yet economically airfare is likely to be the cheapest of these (for those in