ICIN 2012 Realizing the power of the Network

Notes by Warren Montgomery

 (wamontgomery@ieee.org)

 

ICIN 2012 took place in Berlin, Germany from October 8-11, 2012.  The ICIN conference is in its 23rd year, and I have been attending it regularly for much of that history.  The conference has evolved from its beginnings as a focal point for work on the Intelligent Network to encompass a broad range of topics related to providing services in communication networks.  This year's areas of focus included the impacts of cloud computing, privacy and security issues, competition between network operators and "over the top" players, and sessions on business models, service creation, and a variety of services related to public safety.  For a full program and other information on ICIN, see www.icin.biz.

 

Personal observations on the conference:

-          The competition between over-the-top (OTT) players and network operators continues to heat up.  The internet and the presence of OTT services are redefining the role of the telecom service provider.  What will happen is not entirely clear.  My personal opinion is that network operators will in the end provide "enhanced transit" plus certain identity, billing, and security functions that are convenient for them to provide, while most vertical services will be provided competitively.  There are opportunities for everyone, but threats to business models that don't acknowledge the new structure of networks.

-          Privacy, Security and the enormous growth of data in the network are becoming major issues that we haven’t yet adequately addressed.  Governments are beginning to regulate in ways that impact architecture choices.  Users aren’t well educated in how to use the tools we give them.  Regulation has so far failed to provide adequate answers to these issues.

-          Migration of services to the "cloud" is unstoppable.  Many industries are realizing the advantages of making use of rented computing cycles and software in cloud implementations.  In spite of concerns raised about privacy and security, and suitability for "mission critical" services, the economics are compelling for many applications.  Network operators figure into this trend both as consumers (i.e. migrating their operations to cloud-based implementations) and as providers (using their assets to offer cloud services).

-          Networks can play a major role in public safety.  The 2011 earthquake and Tsunami in Japan was one trigger to look at how networks can enhance public safety, and several innovative applications were shown.

-          Machine-to-Machine communication (or the Internet of Things) presents new challenges to network providers.   The connection of large numbers of non-human endpoints (e.g. sensors, controls, etc.) to networks requires changes to many assumptions about how networks are used today, and will require creative solutions to issues like identifying and authorizing endpoints on networks, routing and traffic handling, and others.

-          The conference brought into clear focus the sharp differences between the Telco world and the internet world:

o        Telco’s are software consumers, and have an interest in standardizing it.  Internet application vendors are software providers, and have an interest in maintaining a proprietary advantage in their products.

o       In the voice world, Telcos were the users of the network and had an interest in standardizing it.  In the internet, the "users" of the network are the software companies, who are largely not involved in network standards.

 

In short, in spite of lighter attendance (most likely due to a combination of the economy and changes to the conference’s policies on paper submission and registration fee discounts) this conference remains a vibrant place to go to learn the latest trends in our hyper-connected world.  What follows is a discussion of the sessions that I attended during the week.

 

Tutorial: Towards a converged M2M service platform (Fraunhofer Fokus)

 

I attended parts of this tutorial on the Future Internet presented by a team from Fraunhofer Fokus.  The tutorial covered the forces driving the evolution of the internet and key work in standards and architecture aimed at addressing the new needs these trends are driving.  Those trends include:

 

-          The growth of the internet beyond academic and entertainment-only uses towards critical infrastructure.

-          The “Smart Cities” concept, where the goal is to support the projected migration of more than 70% of the world population towards cities. This is particularly true in emerging countries, where these cities are being built from scratch and there is an opportunity to design the infrastructure with the capabilities of universal communication in mind (e.g. networked sensors and actuators and applications that take advantage of it to support more efficient transportation, health care delivery, government, etc.)

-          The growth of Machine-to-Machine communications

 

He showed a projection of the market for "Smart Cities" over the next decade.  (Comment – I was surprised to see this total only $18 billion and have the shape of a market that matures over that period.  I would have expected a larger total market and one that continues to grow rapidly long after 2020.)  He showed a value chain for this, with today's network operators positioned largely just in access/connectivity.  Operators are looking to move up in the value chain, but there is intense competition for that business.

 

He showed a case study for the city of Barcelona, which architected services around a common data repository.

 

This tutorial positioned the evolution around IMS evolving to serve the future internet.  (Comment – this is no surprise, since that has been their focus of research and prototyping, but I always wonder whether this vision will really serve the players looking at the application layers today based on today's internet.  The telecom industry has a long history of designing well thought out communication infrastructures which the data/internet players largely ignore, because they have already designed around "dumb" networking technologies.  Look at ISDN, ATM, and IMS, all of which were aimed at serving data-based applications and were largely bypassed.)

 

The most important characteristic of the future internet and in particular smart cities is the move from Human to Human communication to Machine to Machine.  Human to Human communication is largely session based and in repeating patterns (point-to-point, multicast, etc.).  Machine-to-Machine is largely transaction based, which may range from a few bytes over an uplink periodically, to massive downloads.  Facilitating this either requires supporting a massive number of long lived sessions, or a datagram oriented structure, which is not what today’s networks are optimized for.  (Comment – the other difference I see is that humans adapt to networking problems (e.g. redialing a failed call), while machines cannot be relied on to do so (e.g. either not retrying an error, or retrying too many times and/or too rapidly for the network))  They have demonstrated that IMS on Evolved Packet Core can serve as a platform for this, but they are looking beyond that at what is really needed.

 

There are many standards for platforms for smart cities based on the SDP approach (OMA NGSI, GSMA OneAPI, RCS APIs), and evolving APIs for data access.

 

Historically the people working on the applications have worked independently of the people doing the network ("What do you need? -- Just connectivity!").  He overviewed a set of EU initiatives on "Smart" industries using communication to optimize various industries, with a potential impact of 600 billion Euros.  (Comment – this is more the magnitude I would have expected for impact.)  Example – networking waste bins to tell the waste collector when they are full and then optimize waste pickup.  (Comment – yes, but do you also need to determine when the waste is beginning to smell? :-))  He talked about several pilot projects being developed in these programs, including one that will take place in the near future in Berlin.

 

Next, they looked at the evolution of M2M.  Today M2M is largely about connecting dumb things (e.g. sensors) to applications running on central servers.  He drew a distinction between M2M and the "Internet of Things", in that in IoT devices are passive (like RFID tags that need to be read), while in M2M everything is potentially active (e.g. a refrigerator might report status, but might also decide to re-order food).  (Comment – the thing that came immediately to mind here is the concept of "Actors", an old abstraction from the area of computer science where Artificial Intelligence meets Operating Systems, which seems well suited to supporting smart devices.)

 

M2M critical capabilities include huge scale and the ability to scale, heterogeneous traffic (session vs. datagram, multiple media with different error/performance requirements, etc.), "invisibility", "criticality", and "intrusiveness" (this last involves the interaction of M2M with social networking and all the consequent privacy and security implications).

 

He gave many examples of M2M including public safety and automation with the message that it is extremely varied and the needs are very broad.  He talked about the need to study the possibilities (“Not all the things we can do from a technology perspective are good”)

 

He gave the analogy of a smart city to a human body, with the internet of things as its nervous system.  There is a lot that goes on between "intelligence" and "raw data".  (Comment – interesting.  I expect a biologist would agree, since one of the interesting things about the nervous system is that a lot of signals are pre-processed by it before they reach the brain.)

 

Data from Cisco – by 2016 there will be as much M2M traffic in the internet as there is total traffic today, measured in exabytes per month.  (Comment – I expect the figures may be even more dramatic if you look at sessions or datagrams, since I expect more of today's human-driven traffic involves long-lived sessions streaming video.)

 

He talked about the most important industries and the top 3 as Transportation, Utilities, and Automotive. (Comment – I think this was from a survey and is really the 3 most interested in it)  Utilities, automotive and transport show up as the highest growth rate, but Consumer Electronics and healthcare show up in the top 3 by revenue (Comment – I expect consumer electronics includes all that revenue from media streaming and downloads)

 

He showed a view of today's M2M world with lots of standards, vendor associations, device associations, etc.  He talked about a new initiative, oneM2M, for standardizing a common M2M service layer, which held a recent organizational meeting.  The idea is not to replace existing standards organizations, but to allow them to partner.  Seven organizations are involved (ETSI is one, OMA is another, but there are many in this list I did not recognize).

 

He talked about some working groups in this area, for example the IETF CoRE (Constrained RESTful Environments) working group, which addresses constrained devices and networks (limited packet sizes, capabilities, etc.).  Its Constrained Application Protocol (CoAP) is designed to operate in constrained IP environments using RESTful APIs.
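
As a rough illustration of the kind of RESTful interaction CoAP supports, here is a minimal client sketch using the open-source aiocoap Python library; the gateway host and resource path are made up for the example.

```python
# Minimal CoAP client sketch (assumes the aiocoap library is installed and a
# hypothetical sensor resource at coap://gw.example/temp).
import asyncio
import aiocoap

async def read_sensor():
    # Client context handles the UDP transport and CoAP message exchange
    context = await aiocoap.Context.create_client_context()
    # A RESTful GET on a constrained device; CoAP keeps headers and payloads small
    request = aiocoap.Message(code=aiocoap.GET, uri="coap://gw.example/temp")
    response = await context.request(request).response
    print(response.code, response.payload.decode())

asyncio.run(read_sensor())
```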

 

He talked about huge variation in time vs. usage patterns, from smartphones which communicate on a second by second basis to "eCall", an automated report of a vehicle crash, which has to function correctly but may occur only once in many years.  (Comment – a traditional problem in high availability systems is how to ensure error detection and recovery work even when they are needed only very rarely.  I expect the M2M world will be full of this kind of system, which presents many challenges in making it work reliably.)

 

He overviewed various standards activities that fall under the umbrella of oneM2M, including an overview of OMA's Lightweight M2M service enabler, an initiative in progress to define APIs related to this, as well as 3GPP initiatives in this area.  He also described an ITU-T M2M activity in an M2M Focus Group that is being driven by e-health activities.  (Comment – my jet lag prevented me from taking detailed notes on all these efforts; the bottom line is that this is no different from other new areas, where every existing standards group that can expand its scope to cover it has initiatives working on it, and the challenge is to figure out how to harmonize all of them.)

 

Question:  What’s the bottom line – which of these is really going to dominate?  Answer – ETSI has the most established work in this area and we can’t afford to wait – IOT is already here and the ETSI standards can serve it.

 

The next part of the tutorial was on the ETSI TC M2M standardization effort, which is the most mature of those in the area.  He gave a matrix showing the major areas needing standardization and which standards efforts are working on them.  One of the aspects of this is to look at the vertical market segments (eHealth, smart metering, automotive, smart grid, etc.) and map these into efforts that address the common needs.

 

He gave an example for the connected Home:  There is a M2M gateway with the Gateway Services Capability layer, and on the network side a Network Services Capability Layer which communicate via any intervening network.  (Comment – this seems to imply a client/server rather than a peer-to-peer kind of model.  It is useful to have standards for how to access the many “dumb” endpoints in a home, but I wonder if the model is in fact limiting, since I expect many machine endpoints to be both smart and free standing)

 

The standard is structured so that not all service capabilities are mandatory, and there are provisions for interworking (e.g. working with devices that do not implement ETSI standards through a gateway).  There is also a standard data model which organizes data in a resource tree to simplify access.  The focus is really standardizing the organization and "syntax", not necessarily the semantics (e.g. it provides a standard way of discovering what data is available from a set of sensors, but it doesn't mandate how, say, temperature or pressure are delivered by the device).
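
To make the "resource tree" idea a bit more concrete, here is a toy sketch of how such a tree might be represented and discovered.  The node names loosely follow the ETSI M2M style (sclBase, applications, containers, contentInstances), but the structure and values here are purely illustrative, not taken from the standard.

```python
# Illustrative only: a toy resource tree loosely modeled on the ETSI M2M style.
# Real implementations expose this tree over RESTful create/retrieve/update/
# delete operations rather than as an in-memory dictionary.
resource_tree = {
    "sclBase": {
        "applications": {
            "tempSensorApp": {
                "containers": {
                    "readings": {
                        # The organization/"syntax" is standardized; the payload
                        # semantics (units, encoding) are left to the device.
                        "contentInstances": [
                            {"id": 1, "content": "21.5"},
                            {"id": 2, "content": "21.7"},
                        ]
                    }
                }
            }
        }
    }
}

def discover(node, path=""):
    """Walk the tree and print resource paths, mimicking a discovery operation."""
    if isinstance(node, dict):
        for name, child in node.items():
            discover(child, f"{path}/{name}")
    else:
        print(path, "->", node)

discover(resource_tree)
```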

 

The next segment covered the Fraunhofer Fokus efforts and the available tools for this area.  They have recently re-organized to have over 500 researchers working on this and related areas, with efforts around various vertical applications as well as the underlying technologies.  He highlighted two "playgrounds" (their word for test beds, I think), including the Open SOA Telco Playground, which is designed to support a variety of vertical application areas on top of IMS/EPC and a variety of access and transport technologies.

 

The remainder went through the details of the toolkit and what it provided.  A very comprehensive set of capabilities including interworking with non-standard devices was presented.  (Comment – this looks like a very nice prototyping toolkit for testing out services or infrastructure pieces designed to work in a standardized M2M environment.)

 

They are working to become the reference model implementation for ETSI in this area.  He gave a release schedule, which is working towards having Android-based implementations of a lot of the interfaces by spring 2013.  Some of the other advantages they cited for their platform work included low cost (because of its research vs. commercial focus) and their relationships with universities and others in the community, which result in building in a lot of work from the best research in the industry.

 

He gave some slide-based demos of the capabilities.  One was on SIP-based M2M communication.  They use laptops to implement the M2M gateway and the service-side gateway, and interface to various smart devices in the home (controllable power plugs and sensors).  There were scenarios that involved sensing when the user left home, alerting the user about devices left on, and allowing the user to remotely shut off power to them, as well as remote management of a fan to cool a home before the user arrives.

 

He talked about some extensions where for example you would be able to get some kind of control application from a 3rd party who could use those APIs to automate the management of your home.

 

The second demo used an Android phone as the user device and allowed remote control of the home.  (Comment – on the one hand there's nothing new here; someone I worked with built a remote home management application using ancient ASCII terminals, ISDN, and power line networking in the home in 1978.  But it is timely.  In the US, utilities are beginning to push solutions like this and are actually willing to subsidize it, with an important addition – the utility gets to shut off high-utilization appliances during times when power demand exceeds capacity (though the user can generally override this).)

 

The summary here is that while a lot of operators are still trying to decide on a strategy for deploying 3rd party APIs in their networks, the Internet of Things is already here, along with the need for Smart Cities, so there is an opportunity to move rapidly into new application areas with the potential for high demand and high revenue.  ETSI standards are the most mature in addressing these areas, they have been designed to be easily extended to incorporate new technologies and interwork with non-standard elements, and Fraunhofer Fokus has an effective prototyping platform.  They believe this is at least a 10 year focus for the industry in developing and harmonizing effective standards.

 

Finally they gave a plug for their conference in Berlin November 15/16.  (Comment – would it help ICIN to coordinate with this?  I believe we did this once with their IMS forum, but as noted elsewhere I'm not sure how much it helps ICIN to coordinate with "like" activities that largely draw the same audience.  That was our experience when ICIN was coordinated with the Fraunhofer FOKUS event before.)

 

Keynote Session

Stuart Sharrock – conference chair.

Stuart opened the conference with a story about Einstein, who would send out the same set of exam questions every year – ICIN has also been asking the same questions every year about services and networks.  Einstein's insight can be applied here as well: what is interesting is not the questions but the answers, which change every year, and in our case the answers are getting clearer as we learn more about how to do it.  He acknowledged our sponsors and overviewed the receptions and other aspects of the conference.  He also announced that Bruno Chatras will be chairing ICIN 2013.

 

(Comment – I was actually pleasantly surprised by the size of the audience in the room.  While it is many of the same people who come year after year there are new faces and we are still a substantial community representing a broad set of interests, countries, and companies.)

 

Stuart introduced Hagen Hultzsch from Deutsche Telekom labs (T-Labs) as a sponsor to say a few words about their involvement.  Hagen is now retired, but has been active with the conference for a long time and has been on the board of directors for DT.  He talked a bit about the evolution of the conference from focus on Telecom towards focus on Internet and the debate going on in the TPC on renaming the conference (something like Internet Conference on Intelligence in Networks).  (Comment – the conference name is actually a very significant issue, since the conference is almost 25 years old and ICIN has been the brand name for all that time, but the problem is that for some people that brand has become dated, and even if we re-assign the letters in the acronym, I have heard people who translate it as “I See IN” and reject that it can be relevant to the new world of internet services.)

 

Stuart recognized Igor Faynberg, Alcatel-Lucent, who is this year’s TPC chair.  Igor has been going to ICIN since 1992.  He talked a bit about the long term evolution of the conference from IN to IMS, peer-to-peer, internet services, cloud computing, and other areas. 

 

An Opportunity for All to Co-Operate (Luis Jorge Romero – Director-general ETSI)

 

Standards are all about Cooperating, and Cooperating is a fundamental driver of evolution – it’s how we have managed to be where we are today.  He started with something basic – Shannon’s law, which related channel capacity to bandwidth and signal to noise.  Our industry is all about how to improve that.  (Comment – I know this is a metaphor, but ICIN is much more about increasing the value of each bit than the number of bits you can push through the pipe.)  He stated another networking law, that the value of the network is proportional to the square of the number of nodes, hence increasing the number of connected endpoints has dramatically increased the value of networking.  “6 degrees of separation” states that you can reach anyone else in the world through a maximum of 6 links.  (Comment – I’ve always found this interesting.  Much of the early work in informal networking was indeed about discovering how to get to someone and empirically proving this relationship.  What I find interesting is that we are still talking about that even after adding many orders of magnitudes of nodes.)
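
For reference, the standard forms of the two relationships he invoked are the Shannon capacity formula and the network-value heuristic usually attributed to Metcalfe:

```latex
% Shannon capacity: C in bits/s, B bandwidth in Hz, S/N the signal-to-noise ratio
C = B \log_2\!\left(1 + \frac{S}{N}\right)

% Metcalfe-style network value: the value V of a network grows roughly as the
% square of the number of connected nodes n
V \propto n^2
```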

 

He talked about social networking and how this follows the same laws of value and separation.  (Comment – he admitted, like one of the Fraunhofer speakers, that he does not use Facebook, but acknowledged it's important, and in particular it's important to have multiple social networks.)  He gave some very interesting historic data about the growth of SMS in Spain: in one month the number of messages quadrupled, and the only thing that changed was that their two local operators suddenly allowed interworking.  (Comment – I agree with this and have seen data to support it, but just about every commercial operator of a proprietary communication service has taken a contradictory approach – instead of interworking with their competitors they avoid that and instead attempt to force users to choose their service over the competitor so they alone get the dramatic increase in business.  We saw that with Instant Messaging 10-15 years ago and are reliving that history with social networks today.)  He did point out that social networks today would not exist if the standards weren't in place to allow interworking of the access and message delivery that they ride on.

 

Now we are taking a big step from person-to-person to machine-to-machine communication, and that comes from multiple sectors (i.e. different vertical applications will place different demands on networks and different requirements for standards).  One thing he emphasized was that in the voice world, the Telcos were the users of the network standards and they were all at the standards table, but in the machine-to-machine world, providers of applications are the real users of the standards, and they have to be brought into the process for it to succeed.  (Comment – I believe this is actually a very perceptive observation of why things are so different in the internet world.  Indeed, traditionally network operators were the stakeholders for standards, and when it was just voice they could adequately represent the needs of the human user, but in the data world there are a huge number of application builders and users whose interests must be represented, and they haven't been part of ETSI and other standards communities aimed at the Telco world.  It's not clear to me how to address this since there are way too many players to have them all be active, but this is indeed likely why there are so many industry associations that attempt to represent the interests of these players, and they often wind up working different agendas from the traditional standards community.)

 

Igor Faynberg asked a question about participation regionally.  He cited a recent example where the US Government has decided they need to be able to realize the economies of cloud computing and have started paying a lot more attention to the standards related to this as a result.  He cited a couple of other examples from Europe where governments have asked ETSI to coordinate standards around cloud computing to find out what is available, where the gaps are, and develop standards where needed.

 

Roberto Minerva said that the biggest issue with standards has been speed!  What can we do to speed up the standards process?  Response: there are two factors.  One is the process, and this falls on the Secretariat's shoulders – they need processes that are fair, and that takes time, but they will work to improve speed within those constraints.  The second issue is the need for participants to achieve consensus, and as the stakes have gotten higher for companies who have big opportunities if their version of the standard is approved, this has become increasingly difficult.  He had no magic to suggest there is a faster way to do this.

 

Question/ Comment – Today’s hot applications may not be based on standards at the application level, but they do rest on standards below them for access and distribution and networking.  (Comment – I think the real issues are business strategy and “design for interworking”.  The business strategy issue is that innovators need to be thinking that in a mature industry they will have competitors for their service and they will recognize more value by interworking with them than trying to dominate them.  “Design for Interworking” is really about how you architect your new system with the realization up front of the fact that you will be interworking with similar products.  I believe in fact that this mindset has a lot of value even if the company winds up being dominant, because if nothing else they will need to interwork users/customers who have the latest version of the product with those who have not yet updated, and this is much easier if that is the assumed operating mode from the start)

 

Removing the Fog from the Clouds (Markus Hofmann, head of Bell Labs Research)

There is a lot of hype around cloud computing, but there are good reasons for the hype – we really do have applications for which there are huge variations in the server power required, and cloud computing is of real value in managing that.  He gave an example of a video service which saw demand go from one server to almost 5,000 in 3 days.  They could do this because they were early adopters of cloud computing and were able to rent that capacity rather than actually buy and install that much computing.  (The explosive growth was due to the addition of Facebook subscribers, and was not anticipated.)

 

He said the vision we ought to have is that computing and data could come from anywhere, not just big data centers.  Today’s cloud computing is much more narrowly focused:  Transaction oriented, stateless, and relaxed time constraints.  What he is looking for is including services that are stateful/session oriented and have strict timing requirements.

 

They took some measurements of performance with today's cloud computing.  Latency in a very simple (write data) application was scattered over a wide range, with an average of 500ms, but with outliers that were much longer.  This is good enough for transaction-oriented web services, but it won't do for services with real-time needs.  (Comment – one interesting characterization of real-time systems is that they have to be optimized to reduce "worst case" performance, vs. others where we optimize average performance and accept that there will be outliers.)
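
A quick way to see the difference between the average-oriented and worst-case-oriented views is to compare the mean with a high percentile of the latency distribution.  The numbers in this small Python sketch are invented, merely in the spirit of the measurements he described (an average near 500ms with much longer outliers); it is not the Bell Labs data.

```python
# Toy illustration with made-up numbers: why tail latency matters more than
# the mean for real-time services.
import random
import statistics

random.seed(42)
# Mostly ~500 ms writes, plus a few long outliers (e.g. scheduling stalls)
samples_ms = [random.gauss(500, 50) for _ in range(990)] + \
             [random.uniform(2000, 5000) for _ in range(10)]

mean = statistics.mean(samples_ms)
q = statistics.quantiles(samples_ms, n=100)   # 99 percentile cut points
p50, p99 = q[49], q[98]

print(f"mean = {mean:.0f} ms, p50 = {p50:.0f} ms, p99 = {p99:.0f} ms")
# A transaction-oriented web service is judged mostly by the mean/median;
# a session-control function with real-time needs is judged by the p99+.
```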

 

Their interest in this is whether they could produce a cloud-based wireless infrastructure.  He showed the Bell Labs wireless cube, a small (hand sized) building block for wireless access that they feel will solve a lot of problems of where to put the infrastructure ("Not in my backyard").  To exploit this, they are talking about bringing a lot of the functions that have traditionally been deployed in the periphery with the cell sites back into the network (in the cloud), but it won't work with the kind of performance seen for cloud computing in support of transaction services.  His view of how to do this is to migrate away from a data center based view of services, where they all run in the big room full of server racks, to one where they can be more widely distributed and as a result closer to the point of use.  (Comment – this is interesting, in that I expect that this will optimize average performance, but won't solve the outlier problems.  Consider what happens when there is a focused load on one part of the mobile network that exceeds the capacity of locally deployed servers to handle tasks like registration, hand over, and session control, forcing these to be picked up by servers in distant sites.  This sounds nice, but it may indeed cause a spike in response times at exactly the time when you don't have the performance.)

 

He talked about their efforts in standards, in particular the IETF ALTO group which they co-chair.  (Comment – yes, this is consistent with a paper and demonstration they have in the conference)  He made an important distinction – you have to standardize the interfaces, you don’t want to standardize algorithms (e.g. how services are distributed).  That allows the vendors to each have their own “secret sauce”.

 

Having a world where servers are broadly distributed raises many questions, like security – you can’t achieve security simply by putting up a big firewall around your data center if the servers performing your services are scattered all over the world including some in places that are impossible to secure.

 

Question (Warren Montgomery) – Won't having the resources distributed worldwide make it even harder to achieve good "worst case" performance?  (I gave an example based on the recent Ryder Cup golf tournament where 50,000 people, most of them with smartphones, were trying to use them in a physical area of less than one square mile.  The result had to severely overload the capacity of local mobile networks.)  His answer was really that their solution will do better than traditional cloud computing approaches in addressing this because the resources needed can be "close enough" to achieve good performance.  I believe what is really going on is that traditional cloud computing is not designed for real-time performance at all, but their implementation is for servers that will be designed to deliver it.  The networking transit time is probably not the big issue causing the variance, so this will work.

 

Mobile Network and Cloud Services in Disaster Recovery (Minoru Etoh, Senior Vice President NTT Docomo). 

 

He started with a video and description of the March 2011 earthquake and Tsunami in Japan.  It was interesting footage.  He showed a video of a government meeting in Tokyo during the event.  Japan had good warning systems that alerted them to the shockwave in advance and even the potential for the Tsunami, but grossly underestimated the height of the Tsunami.  In addition to footage of the approaching water, some of which I hadn't seen, he had many "after" pictures marked with the height of the Tsunami and its impact on NTT.  About 19,000 people were killed or remain missing, but 400,000 people were successfully evacuated.  (Comment – this is actually a very interesting statistic, because I am sure you would find a much higher rate of casualties for past events like this without the warning technology.  Because of the extreme consequences of the event we often focus on what went wrong, but it is important to remember how much actually went right and prevented a much worse loss of life.)

 

He showed a day by day evolution of the communications patterns in Japan, and there is a very noticeable impact on the coastal region of Japan.  It’s not an immediate effect, because most of the mobile network survived the event except on the immediate coast, while over time the batteries in areas that lost power ran out.  It took 10 days to restore power to the infrastructure that survived.  Restoring the lost infrastructure in the coastal network took a month and a half.  260 million Euros were lost in the event to NTT.  4,900 base stations were lost.  (He showed a map of where they were and it was interesting because many were not on the coast and they were not all contiguous.  I suspect the reason is that the earthquake also caused damage and that varied from place to place based on the local design of the site and the severity of the ground motion in that location).  They have spent 200 million Euro on new disaster preparedness measures:

 

-          Large zone base stations (104 locations, 30M Euro)

-          UPS with 24 hour autonomous power supply in 1700 stations (140M Euro)

 

The large zone base stations are used only for backup and allow them to restore service to a large area with minimal infrastructure.  I believe the large zone stations also allowed them to include more robustness measures, like redundant backhaul, more robust earthquake survivability physical design, and more robust power.

 

They used more satellite and microwave backhaul too.  They had abandoned microwave in the past, but the beauty of both is they do not require fragile fixed infrastructure.  They also have a fleet of mobile base stations in vehicles which can be deployed to increase service in areas which have lost it. 

 

They can now serve 60+% of their customers for 24 hours without local power.   One thing they have been developing is solar recharged base stations (Green base stations).  (Comment – I assume this is cost prohibitive to do more broadly, and I wonder whether the solar infrastructure survives a major earthquake)

 

He gave a simple review of earthquake early warning based on detecting the P (pressure) wave, which travels faster than the S (Shear) waves which are the ones that cause most of the damage (Comment – this is established practice, my wife is a geologist and as a result I am familiar with earthquake dynamics and detection methods).  He talked a bit about what is the harder problem – getting the information on an impending earthquake to the right place at the right time.  (Comment – the fact that you only have 5-10 seconds between the P and S waves may not sound like enough, but that is sufficient to put systems into a “safe” state so that sudden failure through ground shaking does not cause excessive damage, and this has been successful in such areas as avoiding trapping people in elevators and shutting down rotating machinery which may be dangerous if suddenly shaken)

 

He showed a video in which they used smartphones to warn people of an impending quake, with a unique warning tone to tell people “it’s time to get shelter”.

 

For future work he presented a short film which used the mobile registration with local base stations to infer population statistics about the area – demographics, movement, density, etc.  This was called “Mobile Spatial Statistics”.  (Comment – I wonder how much is needed in assumptions about user behavior to make this work, and how stable those assumptions are.  Mobile usage patterns are highly variable, and the inferred population may look quite different if usage was monitored on one of those days where certain groups were heavy users of their phones.)

 

He showed a set of slides showing the inferred population in Tokyo areas over a typical day, which showed spikes in the central business district during working hours and then an ebbing.  He suggested that this would be valuable for evacuation.  (Comment – yes, but his example showed precisely the effect I wondered about – part of the reason for those spikes is no doubt that people use their mobiles more during some parts of the day than others.)

 

One final comment about the Tsunami was that a lot of people lost pictures and other critical personal data and that this motivated them to enter the “cloud storage” business for their customers to avoid that kind of problem in the future.

 

Question (Igor Faynberg):  What additional standards are needed in messaging for alerts to citizens?  (IETF is apparently working on this.)  In Japan today they have standards that allow TV broadcasting to be interrupted for warnings.  What actually happened at the time of the Tsunami was in fact a surge of SMS traffic as individuals warned their friends.

 

Question (Chet McQuaide) Chet asked whether they interacted with first responders to determine whether personal identification information in the mobile network would be useful to them in locating victims in a disaster.  Answer – theoretically they can do that, but it is difficult today and they aren’t doing it.  (Comment – I believe Chet worked on something like this a year ago.  It strikes me that if you could have a database relating user information to mobile device identity it would be very useful in figuring out which people might be trapped.  His response though was that Japan has very strict privacy laws that prevent this.)

 

Question (Hui-lan Lu) – Is the large base station concept only for backup?  Why doesn't this make them more vulnerable to large scale outages?  His response wasn't really clear.  I think the answer is really "it's a lot better than no backup at all".

 

Question   Isn’t one of the problems after a disaster that get swamped with non-essential traffic?  Answer – yes, one of the problems they had with 60% of the network working is that those people were flooding the access into the non-working part of the network with non-essential traffic.  (He wanted to shut off access to “Youtube”, but they couldn’t do that due to regulation)  Someone else suggested concepts like reducing the voice codec rate or blocking or lowering the coding rate for video to extend the available bandwidth.  (Comment – this would probably help, but I expect that “service infrastructure capacity” is at least as much of an issue as bandwidth.  It does no good to allow more calls in the available bandwidth if you don’t have the call processing capacity to complete them.

 

Demonstration Digests (Warren Montgomery, Insight Research)

 

I gave an overview of the 4 presentations in the poster/demo session this year and the two sponsor’s presentations.  The details are covered in the notes on the demonstration session later on.

Making Sense of a Changing World, Berlin Style (Christoph Raethke, Founder – Berlin Startup Academy)

 

This session featured innovators from the Berlin ICT community,  and has been a feature of ICIN since it moved to Berlin in 2010.  As an overview Christoph introduced a little about what has changed since the last year.  Berlin has been recognized in some areas as the most innovative startup city in Europe.  One thing that has happened since last year is that they have much more startup money available.  It isn’t like the 1999 startup boom, but they are seeing funding of 100,000 Euros available for entrepreneurs. 

 

In the past there have been some failed experiments with Telcos and Telco vendors trying to fund and work with startups.  That didn't work well because those companies were used to working with big players who could afford the overheads of working with a big company, and they were not interested in small innovators.  That's changing, and Deutsche Telekom and Alcatel-Lucent have both been more active working with entrepreneurs.

 

A lot of the issue is educating entrepreneurs on avoiding big mistakes – working with big companies, complying with expectations of regulators, governments, etc., avoiding big business mistakes (too much publicity, not enough publicity, not enough feedback).  The interesting thing about this event for them is having an opportunity for entrepreneurs to meet the “big company” community and for each side to learn from the other.

 

Favor.IT (David Federhen)

 

(Comment – It is interesting that I don’t think any of the speakers in this session put their names on their title slides, so I wound up looking them up in the program and may be wrong if there were any last minute substitutions.  The culture of the startup academy is clearly one that is focused on the businesses, not on the personalities.)

 

He started by commenting that this is probably the youngest startup on the agenda today.  Their concept is a social networking service connecting users and businesses that improves on Foursquare, the dominant player in this field today. 

Lessons from Foursquare – their app lets users "check in" to locations and let their friends know what they feel about the place.  The problem is that it has no inherent value and it has been difficult to monetize.  Another issue for Foursquare is that they did not have an exit strategy.  They could be acquired by another player like Facebook, but Facebook chose to field its own similar offering.  Favor.it aims to avoid these mistakes.

 

"Mobile ads are undervalued."  He presented an interesting chart of how much money is spent on advertising in each medium versus how much time people spend on that medium.  Print is a standout high (more ad money than time), but mobile is a standout low (almost no money spent on it, but users spend a lot of time on it).

 

What is the business model for online advertising?  Online-to-offline commerce (e.g. Groupon), where online you get coupons that drive business to brick and mortar businesses.  90% of commerce is offline, but 50% of offline purchases are influenced online.  40% of mobile searches refer to local places, products or services.

 

Favor.it is a platform that helps the user connect to local businesses.  (80% of the time when you enter a business it’s one you already know and like, they want to take this to the next level by allowing the business to reach you better.)  (Comment – personally I’m drowning in unwanted advertising from businesses I have frequented both online and offline, and not really looking for more of it, so I really wonder about this).  Favor.it allows you to favor, share, and recommend places to friends, stay in touch with the business, access targeted offers, and use the App to make purchases, earn rewards, etc.

 

What he described was a smartphone app that does all this and operates quite smoothly.  In addition, the app allows the business to track their success in reaching the customer and influencing customer behavior.  Part of the idea is to give the business the same capabilities and support they would get from a traditional advertising agency with dedicated staff to support them.  (Comment – this sounds quite interesting to the business, but I really wonder about consumer overload.)  A lot of the value to the business is basically giving them a toolkit to easily go after mobile online business without having to put all these pieces together for themselves.  One customer is using this to sell tickets online, which is a much better alternative for them than other approaches.  They are open to 3rd party connections (he showed connections to Foursquare, Pinterest, and others).  He closed with his contact information: david@favor.it

 

Skobbler (Phillip Kandal)

Skobbler is all about maps and applications.  They consider themselves to be "the Red Hat of navigation" – an open source model for location and navigation.  It's the number one paid navigation service in 9 countries (including Germany, the UK, and Russia).  They have more than 3 million users.  They have 70+ people in Germany and Romania, and have been break-even since Q4 2011.  They started inside the company Navigon, where the founders of Skobbler developed the concept and bought the rights to take it from Navigon.  (Navigon was later acquired by Garmin.)  Why Romania?  "We can't make much money in apps, so we have to economize because we can't afford to hire the more expensive Berlin developers."

 

Google is a major competitor, but Google doesn’t work well with enterprises because they aren’t willing to customize for them.  Other competitors include MapBox and Waze.

 

It's a layered approach with open source map code (like Linux), which they package as the Skobbler GeOS API and SDK, much as Red Hat does for Linux.  The end-user app costs 1.59 Euro.

 

Work for the future – they need to get into more countries.  Maps aren’t like other apps in that you need to understand how users use them in each country and adapt to expectations.  They are working with lots of partners who want to be independent of Google (Comment – interesting, I guess people don’t trust working with a big company like Google which competes with them in other areas)

 

Echofy.me  (Alexander Oelling) – “the right recommendation at your fingertips”

 

The idea for Echofy.me is to provide recommendations that are based on location and the user's needs.  Solutions today in this space include Google Maps, Amazon recommendations, and Siri on iPhones.  Echofy.me looks at patterns with the goal of answering life's little questions.  (Comment – this is interesting, but I suspect that it's easy to cross the line to "creepy".  An app like that might notice I commonly go to Subway for lunch when I travel, and it would be useful for an app to tell me where the nearest one is, but I think I'd find it creepy to have that recommendation made autonomously without my asking the question.)

 

The business model for this is that it’s free to the end user, but there’s a charge to businesses to promote themselves to the user.  (Comment – well, the problem with this is that the user will rapidly learn not all businesses “participate”, and as a result may mistrust any recommendations.  How often do users really visit those “sponsored links” that appear on Google search results?)

 

This is work in progress.  They have a proof of concept app, they are working on APIs for integration, and they are working on ways to monetize it as described above.

 

What he learned from the startup:

 

-          Start with the solution – know what you want and combine technologies that are already there.  His first startup started from scratch and spent too much re-inventing technologies which instead could have been adapted and re-used.

-          Build scenarios, not features – start small.

-          Collect data and push data in the right context.  Algorithms are important.

-          Let the team grow incrementally.  Don’t push people into the team because they are stars.  (Comment – this I think is a big advantage of startups, they can afford to recruit people who fit both the product vision and the company culture, rather than being forced to assemble a team from people who are there.)

-          Network, Network, Network.

-          Motivation – you need to be visionary and motivate the product, and it’s all about the business model – you have to know how you will make money.

 

Barcoo (Benjamin Thym)

 

Barcoo is a smartphone service that assists shoppers in "brick and mortar" stores.

They have > 7 million downloads, all from the German-speaking market.  They are the number 4 app in this market.  The app basically allows you to scan a bar code on a product and get information on it – both generic information on the product and information on where it can be bought, including pricing, user opinions, policies, etc.  (Comment – yes, I can see where this is really interesting.)  One of the things they do is the food traffic light, which shows you the good and bad things about a product (like nutrition, sustainability, rating, CO2 footprint, etc.).  One thing they did was enter recall information so you could find out whether a particular batch was subject to a recall.  In another example they give you real-time video of the entire production process for a product (e.g. show you the hen house an egg was produced in).  He suggested that while providing this kind of transparency is painful to companies, they will do it if consumers begin to demand it.  Even without that, "crowdsourcing" can supply images of things relevant to the product.  (Comment – yes, but how do you make money off of this?)

He gave an example from Coke.  Coke is locally bottled in Berlin and proud of how they do it, but few consumers will go to the web site to learn about it.  By putting the information into Barcoo they got 130,000 consumers to get that information by scanning Coke bottles!  Another example of what the scan could give you is – competitor’s information.  When you scan Nutella, you get information on a Kraft Philadelphia based product with half the fat and calories. 

 

He also talked about getting information from alternative vendors when a consumer seems to be looking for service on a product and not getting help.  (Comment – this is really interesting, but a bit too reminiscent of the malware that you would occasionally see on the internet which would direct you to a competitor’s website whenever you visited some stores.  I still avoid a certain travel site which leaves a “pop under” window offering an alternative way to look for hotels or flights every time I use them, since I don’t know whether that’s a deliberate part of their own site or the result of some kind of malware infection)

Redefine the possible, Login Berlin  (Speaker from the Berlin Government)

Berlin is a premier IT location – it's not just startups but established IT players who are here.  It is the premier location for games and mobile internet companies in Germany.  He gave a lot of statistics about the Berlin region business community (unfortunately the slide presentation didn't work for him, so I couldn't capture all the details as easily).

 

The evening reception offered opportunities to meet with the Berlin entrepreneurs.  I talked to a couple of them, but there wasn’t much new to me from those discussions.  I was struck by the fact that so many of the companies were working in the space of actively linking consumers to businesses using user profiling and “push” technologies.  As someone who probably deleted at least a dozen marketing emails during the afternoon of the conference I wonder how effective this really is as consumers are increasingly saturated in recommendations and chatty contacts from businesses.  Personally I have reached the point of avoiding giving contact information or if possible identifying information to businesses unless I really want to have a long term relationship (I travel extensively and when I travel visit restaurants and play golf courses that all want to establish some kind of relationship even though most of these places I will visit at most once a year and often never again). 

 

Session 1 – Cloud Opportunities for Telcos (Dan Fahrman, Ericsson, Chair)

 

Dan introduced the session by noting that Telcos actually pioneered some aspects of "cloud" services, by introducing Centrex and the Intelligent Network in the 1960s and 1980s.  Now that cloud services are exploding there are many opportunities.  (He cited an internal Ericsson study saying that the total market in 2016 for just services was 160 billion dollars, with the operator-addressable market being 23 billion dollars.)

 

Applying Flexibility in Scale-out-based Web Cloud to Future Telecommunications Session Control Systems  (Hideo Nishimura, NTT)

 

His presentation was basically on how to replace high availability systems using dedicated active/standby servers with cloud implementations making use of distributed (N+K) Redundancy.  (Comment – Déjà vu!  In the mid 1990s one project I was involved in at what was then Lucent Technologies was an N+K redundant services platform that was used in Lucent’s IN product line and some wireless servers.  We called the concept reliable clustered computing, with many of the same objectives and characteristics of today’s cloud systems.)

 

He went through the requirements for these platforms and how they are effectively served by active/standby systems, but these are expensive (50% of the servers are idle) and vulnerable to double failures.  They also require rapid repair to avoid double failures.  (Comment – another interesting problem is that because the standby side isn't active, these systems need diagnostics and extensive maintenance to ensure that recovery will work properly when needed.)  Active/standby systems also have to be over-engineered to meet peak loads, further increasing the cost since ordinarily most resources are idle.

 

The scale-out architecture uses a cluster of servers that can be allocated to different functions in the network (e.g. CSCF and Application Server).  He went through a replication architecture in which each service would replicate state on another server.  Each physical server may be both active and standby, carrying the active role for some services and the standby role for others.  (This is one way to ensure that all servers are periodically exercised and ready to take over during failures.)
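
As a rough sketch of the mutual-backup idea (an illustration under simple assumptions, not NTT's actual assignment algorithm), each user can be given an active server and a distinct standby server holding the replicated session state, so every machine in the cluster carries both roles:

```python
# Sketch only: assign each user an active server and a standby replica in an
# N-server cluster so that no server sits idle and every server is exercised.
def assign(user_id: int, n_servers: int) -> tuple[int, int]:
    active = user_id % n_servers          # server that processes this user's sessions
    standby = (active + 1) % n_servers    # server that holds the replicated state
    return active, standby

for uid in range(8):
    a, s = assign(uid, n_servers=4)
    print(f"user {uid}: active=server{a}, standby=server{s}")
```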

 

Another aspect of this solution is that the total performance can be scaled by adding servers – you don't have to increase the power of each server.  (Comment – one thing he isn't saying is that the application must be designed without the need for a global shared memory so that it can be distributed across servers connected only by much slower networking.  This is what proves hard in applying this approach to many applications.)

 

One advantage of this approach is the ability to launch small trial services with the ability to rapidly scale up.  (Yes, again, you have to be careful about how you design the service to avoid any assumptions of a central implementation.)

 

He talked about the ability to re-allocate capacity from low priority to emergency services during some kind of emergency, like the Japan earthquake.

 

He talked briefly about the load distribution problem and how they solved it with hashing (this is the problem of figuring out which processor in a cluster gets a new user request.) He also gave some brief charts on scaling performance. 
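
The paper only says "hashing"; a common way to realize this kind of request distribution is consistent hashing, sketched below as an assumption (not necessarily their scheme).  Its appeal is that adding or removing a server re-maps only a small fraction of users, which matters when the cluster is resized to follow load.

```python
# Illustrative consistent-hashing ring for distributing user requests across a
# cluster; the specific scheme is an assumption, not taken from the paper.
import bisect
import hashlib

def h(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, servers, vnodes=100):
        # Place each server at many virtual points on the ring to smooth the load
        self.ring = sorted((h(f"{s}#{i}"), s) for s in servers for i in range(vnodes))
        self.points = [p for p, _ in self.ring]

    def route(self, user: str) -> str:
        # Walk clockwise from the user's hash to the next server point on the ring
        idx = bisect.bisect(self.points, h(user)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["server1", "server2", "server3"])
print({u: ring.route(u) for u in ["alice", "bob", "carol"]})
```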

 

The concluding viewgraphs made clear that this was a prototype, and they have issues like the physical design of servers and networking for a carrier environment to solve before deploying it.  They also feel they need more study of the frequency of state updates to standby servers, trading off performance against the impact of failure (less frequent updating gives higher performance, but means more state is lost during failures).

 

Question:  Have you studied what happens when the server resource pool available varies as well as in systems with virtualized servers?  (I think the answer was not yet.)

 

Question:  Tried to clarify the primary/secondary operation and whether each user would be allocated to a primary server in a fixed manner (the answer wasn't clear).  The questioner suggested that performance could be improved by distributing load on a request-by-request basis instead of user-by-user.

 

Question:  How do you define urgent services that get priority in case of failure?

The answer seemed to be that they were manually defined by the operator, which is what I would expect.

 

Towards a CAPEX-free Service Delivery Platform (Michele Stecca, Paulo Pagiletto, University of Genoa (also an author from Telecom Italia))

 

The goal here was to move all the services required in network operations into the cloud so that nothing needs to be dedicated to deliver new services, eliminating the need for capital expenditures.  Some approaches include open source software and cloud computing.  This paper examines the characteristics of these approaches.

 

Cloud computing can be deployed in two stages: first virtualizing the service delivery platforms using private or public clouds, and then also virtualizing the users' terminals with equipment in the cloud.

 

The pros are obvious and well understood in marketing, but we need to go further and look at the risks:

-          Lock in with vendors

-          Licensing problems (e.g. Oracle)

-          Data privacy (in the EU they have strict rules about sensitive data, which prevent data storage in blacklisted countries.  Unfortunately the US is blacklisted, and most cloud vendors are based in the US).

-          Software/Hardware specific requirements.

 

There are a lot of choices for cloud providers and it’s important to make the right choices. (Comment – I agree with this, but it runs very much against the hype about clouds that they are generic and can insulate you from this kind of thing.)

 

Open source for the SDP means getting a lot of the infrastructure for a distributed component architecture needed from open source projects (Asterisk, Open IMS Core, Application Servers, etc.)  There are a lot of advantages including cost and mitigation of vendor lock in (because these packages run across multiple hardware vendors and Operating Systems), and time to market.  Unfortunately there are also risks – lack of support, licensing terms, and Telco requirements for software quality control. (Comment – Telco procurement for software has in the past made it difficult to incorporate open source because of extensive testing and certification requirements and requirements on version control and license terms.)

 

He gave an architecture that used some open source elements as a load-balancing front end to cloud-based virtual machines that hosted the services.  A lot of the work dealt with how to scale the number of virtual machines up or down. 
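
(Comment – the paper did not spell out the scaling rule, but the kind of threshold-based policy described could look roughly like the sketch below; all threshold values and names are mine, not the authors’.)

    def scale_decision(active_vms, avg_load,
                       scale_out_at=0.75, scale_in_at=0.30,
                       min_vms=2, max_vms=20):
        # Return the new number of service VMs behind the load-balancing front end.
        if avg_load > scale_out_at and active_vms < max_vms:
            return active_vms + 1      # add a VM when average load is high
        if avg_load < scale_in_at and active_vms > min_vms:
            return active_vms - 1      # release a VM when the pool is mostly idle
        return active_vms

    # called periodically by the orchestrator with metrics from the front end
    print(scale_decision(active_vms=4, avg_load=0.82))   # -> 5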

 

His conclusions include that you can take advantage of these approaches to turn Capex into Opex, but it’s not yet possible to get Capex requirements to zero.

 

Question:  Are Telcos ready for this given that they are increasingly reducing their engineering staffs and putting more responsibility for that on vendors?  Answer – no, they aren’t.  Telco experience with open source is low at the moment.  Roberto Minerva (Telecom Italia), who had made the comment about operators moving away from being technology experts, commented that many have the capabilities to deal with open source, but only for limited functions, not the whole stack.  (Comment – I think the real answer here is for a vendor to put together a whole package for an operator, not for the operator to have to assemble it all for themselves.)

 

Follow-up question – Are operators ready for the risks?  The TI author addressed this, saying that this is future work for them; operators need to look at open source more seriously.  He cited recent news in the US that there is a lot of concern about Chinese products in critical operations because of uncertainty about the software in those products and whether it meets reliability and security requirements.  (I am not aware of this, but it is certainly a valid concern.)  Michele cited an existence proof – a Silicon Valley startup which does telecom with open source.

 

Question:  Someone still needs Capex to produce the real servers needed; is the real benefit just the ability to dynamically distribute the server load and share the physical servers (independent of whether they are in an external cloud or not)?  (There wasn’t time for a complete response.)

Monitoring and Abstraction for Networked Clouds (Michael Scharf and others from Alcatel-Lucent)

 

(This was a demonstration submission that the TPC requested be expanded into a full paper and presentation.  There is also a demonstration associated with it).

 

For carrier services, a centralized cloud isn’t sufficient because of the need to reduce latency with servers close to users.  You still have the need to scale, so what you wind up with is a Network cloud, with distributed servers, and the need for active management of where services are delivered and of the networking that connects them to meet the performance goals.

 

What is network awareness?  Given a set of candidate resources in a network, which is the best one to use?  Resource placement must be done for many different kinds of things.  Doing this right requires an interface between the application and the infrastructure.  IETF has standardized such an interface as ALTO (Application Layer Traffic Optimization).  What this does is expose important characteristics of the network to the application so the application can optimize its performance.  This work has been going on since 2008.

 

He gave a simple example of ALTO, which can be used to create a map of the network and show a cost metric of communicating between different nodes.  ALTO is a finished protocol but implementations are still new.  (ALU has been a major contributor and has one of the better implementations in terms of standards conformance and interworking).

 

The basic idea is to use the map provided by ALTO to optimize the communication in the application.  There are engineering tradeoffs in the amount of network detail exposed by ALTO – more detail may help improve allocation, but at some point it becomes overwhelming. 
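
(Comment – for the curious, ALTO is plain JSON over HTTP.  A minimal client sketch in Python might look like the following; the endpoint URL and PID names are invented for illustration, and a real client would first fetch the server’s resource directory.)

    import requests   # third-party HTTP library

    COST_MAP_URL = "http://alto.example.net/costmap/routingcost"  # hypothetical endpoint

    def cheapest_candidate(source_pid, candidate_pids):
        # Pick the candidate network location (PID) with the lowest routing cost
        # as seen from the source, using the cost map exposed by the ALTO server.
        cost_map = requests.get(COST_MAP_URL).json()["cost-map"]
        costs = cost_map[source_pid]          # costs from the source PID to every PID
        return min(candidate_pids, key=lambda pid: costs[pid])

    # e.g. decide which data centre should host a VM serving users in "pid-access-1"
    # cheapest_candidate("pid-access-1", ["pid-dc-east", "pid-dc-west"])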

 

Question:  How do you avoid having not enough resources?  Answer – over-provisioning.

Question:  Is ALTO implemented by commercial providers?  No, it’s not in use yet as far as he knows.  For one thing there aren’t a lot of distributed clouds today.  Lots of router vendors have put ALTO on their roadmaps, but nobody is using it yet.

Question/Clarification:  A Deutsche Telekom attendee said there were no products in this space yet, but many people are looking at it.

Office on Demand: New Cloud Service Platform for Carriers (Kentaro Shimizu, NTT)

Small and medium businesses want to set up an ICT infrastructure quickly without having to deploy a lot of systems.  Office on Demand is aimed at meeting that need with a cloud-based solution that takes advantage of fiber optic networking to the business.

 

The architecture uses a hybrid office gateway as an intermediary between the user’s terminals and network access that can be fiber or mobile, then a set of services in the cloud that provide various services to the business (customer relationship management, etc.)  The services all make extensive use of commercial software and cloud computing so can be easily and inexpensively introduced.

 

The HOG provides service discovery and orchestration that connects users to specific services in the cloud. 

 

He showed a video demonstration.  (Comment – one advantage of the wide availability of video now is that it really helps in understanding the presentations of authors who are not native speakers of English.)  The video showed a bunch of call scenarios and how each one accessed services in the cloud to perform call control.  These included internal and external calls, IVR, and voicemail – typical small business services.

 

He showed the architecture of the prototype used to produce the services demonstrated in the video.  It included a hosted PBX (a computer-controlled switching system, I think) as well as prototypes of the various services and servers.

 

(Comment – one thing making all the papers in this session hard to understand was extensive use of unusual acronyms.  For example “Caas”.  A quick network search turned up dozens of expansions, including several that were relevant to cloud computing.  We need better approaches to naming.)

 

Question (Rebecca Copeland) – isn’t this re-inventing IPBX and IMS with a web client?  Rebecca said she demonstrated all these features using IMS 5 years ago and wasn’t sure what’s new.  He said the ability to host multiple SMBs was the new part, but Rebecca disagreed – said that all this was demonstrated for IMS 5 years ago.  The point being that rather than re-inventing how to do voice services we should be looking at data services.  (Comment – When I read this paper and when I looked at the presentation for the first time I was expecting more packaged value added services as part of the offering, not just voice services.)

 

Question (Igor Faynberg) – Igor said he did see new concepts in this, one question he asked was whether the HOG would provide protocol interface translation such as between a web browser and IMS based services or does the terminal need to implement the required network interfaces directly?  Answer:  Not clear, Igor took it off line.

 

Question:  The previous talk proposed a Capex-free architecture.  Do you need a physical HOG, or can that be moved either back into the cloud or into software on the end user device?  (I couldn’t understand the answer.)

 

Question (Max Michelle, Orange):  Really an observation on what is different – the ability to do software as a service too, which wasn’t part of IMS.  Orange is looking at this as well.

Session 2 Telcos and OTT (Chair Chet McQuaide)

Chet is retired from Bell Labs and BellSouth and is now an independent consultant.  He framed the problem as a differential equation: data traffic is growing more rapidly than data revenue and the trend is accelerating.  Costs increase with the growth of traffic, so over time profits decrease.  The challenge is figuring out how to get revenue to address this.

Toward a New Telco Role in Future Content Distribution Services (Ghida Ibrahim, Telecom Paris Tech and Orange Labs)

The connected world is no longer all about voice, we now have many kinds of content.  One problem is that there are many new players who control and provide content (Facebook, Netflix, etc.)  The paper addresses what the role of the Telco should be in delivering content.

 

Today there is an end to end value chain that includes content producers, Content providers (Youtube), CDN providers (Akamai), network providers, and users.

 

Today content providers typically have a centralized distribution point and contract with several CDN (Content Delivery Network) providers who provide distributed caching of content near the points of access to improve performance and reduce the load on the provider’s servers.

 

Content demand is increasing rapidly, and more of it is going to mobile.  User generated content is becoming important.  Much content is non linear (I believe this means not simple streams). 

 

There are many factors in evolution, including a massive number of new nodes (IoT) and new technology and organizational approaches to content.  There are new requirements coming from mobility.  The result is a rapid increase in the complexity of content distribution, and it is increasingly unlikely that single players will be able to address it all – a federation of players is required, and the Telco is a natural participant in such a federation.

 

Telcos are today the network provider.  They have tried to become content providers and some have tried to get into the CDN space.  In the future most are looking at providing cloud services.  She is proposing that the Telco move to a Business-to-Business role combining Network and CDN functions that have synergy where they can partner with others for content distribution and other functions.

 

She gave an inventory of Telco assets that may be valuable in working with others, including ownership of user information, network monitoring, the user’s context, and extensive network management and control.

 

The Network Management and Control function includes things like caching and mobility handling which are critical to mobile content delivery. 

 

A lot of this falls on the control plane as the main place that interfaces the network to 3rd-party-provided capabilities.  She went through the entities of the control plane (from the IMS reference model), then through the details of what services relevant to content distribution each of the control plane entities could provide and what the limitations were (way too numerous to summarize here; the paper has a lot of detail on this).  The real goal is to figure out which entities can be enhanced to allow the Telco to be a significant player in content distribution beyond just the network.

 

Value Added Mobile Broadband Services (Kris Kimbler, K2K Interactive, presented by Roberto Minerva)

 

(Kris has a long record of participating in ICIN but unfortunately ran into some kind of passport/visa problem that prevented his attendance at the last minute.)

 

Mobile broadband is growing very rapidly.  He said one of the implications becoming visible is that more and more access is via “apps” on smartphones vs. websites.  Roberto pointed out that HTML5 on mobile devices is aimed at restoring the value of open websites in mobile broadband.  (Comment – I agree this is a big trend – the web has an open interface to the browser allowing application integration and re-use at the endpoint.  The app world can and does use proprietary connections to the “cloud” providing the data, which limits access to the information.)

 

Another chart showed the growth of mobile access versus desktop access in India with a near-term crossover.  Smartphones generate 35 times as much mobile data traffic as “dumb” phones (150 MB/month vs. 4 MB/month). 

 

Another interesting claim is that Apps on the phone are replacing SMS, which is a problem for operators since they charge more per byte for SMS vs. raw data traffic.  (Comment – this is also true if you could do OTT Voice over mobile data.  The essential problem here seems to be that network operators have been able to charge a premium for “bits” that represent certain services, but the IP world is making this more and more difficult.)

 

Smartphone introduction accelerated mobile broadband growth in North America and Europe, bringing them closer to Japan, which is still the market leader. 

Mobile data is growing exponentially (Again the paper provides lots of data), but operator revenue isn’t keeping up.  Operators have responded by evolving their data plans:

 

-          phase 1: Unlimited data plan

-          phase 2: Volume based plans

-          phase 3: Value based plans (data cost depends on what you access)

 

Roberto disagreed a bit, stating that the evolution depends on the country.  Italy started in Phase 3 and has evolved the other way.  (Comment – in the US different carriers differentiate themselves through their strategies in this area)

 

More statistics showed that 60% of users had no idea what their data limit was and 75% had no awareness of their usage.  Consumers don’t want to pay more for data, leaving “Phase 3” as the best way for operators to solve their revenue problem.  This involves differential charges for different web access (e.g. more to access Facebook than Google?)  Another potential is value added services around mobile data, like security against lost phones, parental controls, etc.

 

Another idea is limiting bandwidth with the ability of the user to dynamically add speed on demand at a cost.  Roberto disagrees that customers would do this but suggested that perhaps providers of network services would be  interested in doing this and bundling the cost in the cost of their service.  (e.g. you get higher bandwidth to stream a movie from Netflix and pay for that as part of your cost for Netflix.)

 

Another idea – bundling access to particular applications with mobile data offerings (e.g. you get access to certain services only if you pay for a premium data service.) 

 

Another idea – time based charging – you pay more during peak hours for bandwidth, like you used to pay more for calls from fixed phones. 

 

The Application Ecosystem in LTE Networks (Thomas Schimper, Nokia Siemens Networks)

He started by defining applications for his presentation.  For him, “application” means something that delivers value to an end user (not billing, monitoring, OSS functions, etc.)  All have some device part that provides the user interface; some involve a network part provided by the Telco, while others are peer-to-peer or peer-to-3rd-party and just use the access and packet core routing, independent of any value added by the Telco.

 

For the Telco, the subscriber is a direct customer and is linked to the Telco through “locked” hardware (e.g. the SIM card forces you to get certain services from the Telco.)  In the OTT world the access to the subscriber is indirect, there is an open market for apps, and each feature or service can be independently purchased.  (Comment – maybe, but I think there are still natural clusters that are hard to break apart since they interact with the same abstractions in the network, like call control)

He showed pictures of both architectures which highlight the open interface to the OTT world that gives you the “Open Market” approach to services.  The danger for the “Classical Operator” is that they easily become an Access only provider if they cannot compete effectively for the value added services that the user can get elsewhere in an open model.

 

The real problem is the inverted pyramid.  Most of the cost of the network operator is in the network access, less in the core network, and very little in applications, while the revenue picture is exactly the opposite (most in applications, less in core network, and very little in access).  (Comment – isn’t this really just a billing problem?  Could an operator be viable charging a significant rate for access and less for network and applications?  I suspect this could work simply because access is not an area where there is great competition.  A user in a certain physical area has limited choices for access.)

 

There is a lot of stuff in the “Customer Experience” that the operator participates in, including the characteristics of the end-user device, QoS of access, etc.  One issue is that things interact – you can reduce network cost by using an OTT voice service like Skype, but it reduces battery life in the device. (presumably because data transmission requires more power in the device than voice transmission).  The user might not like this tradeoff.  Worse yet, the user doesn’t know who to blame when it happens and might blame the operator (especially if the battery life is a more significant issue in some networks than others because the OTT architecture is data intensive and might exaggerate the impact of coverage issues.)

 

How does the operator respond?  One strategy is to compete against the OTT:

-          Ask for money to enable OTT apps

-          Deny access (No skype by blocking ports)

-          Reduce access (give the user poor performance when using OTT apps).

-          Offer competitive functionality (e.g. RCS)

 

Another possible response – cooperative approaches:

-          Build revenue and cost sharing models

-          Include OTT provider functionality in bundles

-          Control the OTT provider in the network via collaboration.

 

This might allow the operator to be competitive with the OTT world (Comment – maybe, but it seems like there will be such a rapid evolution of OTT services that it will be hard to keep up)

 

Conclusion – technology is forcing the opening of applications; it can’t just be stopped, and major revenue streams get redirected to new players.  Operators could fight it, but we should learn from the music industry (i.e. iTunes was a much better response to MP3 music than simply trying to ban file sharing via legal actions or intrusive copy protection).  A collaboration model could retain revenue streams and allow the operator to stay competitive.

 

(Comment – One of the interesting things here is that historically, “Telco services” have been slow growth compared to all the other things that come in the data world.  We used to say we wished we could get customers to spend on Telephony what they would spend on innovation.)

 

Sun Tzu’s Art of War for Telecom (Hans Stokking, TNO)

Sun Tzu was a Chinese general who wrote a book on the art of war 2500 years ago.  The book does have relevance to military strategy, but it also is about different philosophies of life.  This is where it becomes relevant to Telecom.

 

Hans passed out little response pads for everyone to participate in a voting exercise during the coffee breaks.  We used them to vote on several questions:

-          Do you recognize at least two of these people?  (They were CEOs of computer/software companies)

-          Do you recognize these people (CEOs of telecom companies)

-          What makes a good presentation?

-          What do you expect of this presentation?

-          What’s your view of OTTs vs Telecom.

 

What makes a good leader?  It has to be the right person in the industry.  98% of the people in the room answered yes to knowing at least two of the Software CEOs.  Part of this is that they are leaders, and they are founders.  The big question is how the companies will fare when these people depart.

 

Only 12% of people recognized at least two of the Telecom CEOs.  The proposed reason is that they are managers, not leaders.  They are not founders, they occupy the chair for a while, and unless they really become known for leading innovation personally there is no reason why anyone outside of the company would know what they look like.

 

What makes a good presenter?  67% said one with a passion for the material, 30% said the material had to be interesting, and 5% said the presenter had to be entertaining.

He presented some material from Sun Tzu about the role of emotions – emotions put people into a position where they will react predictably to certain stresses (i.e. do anything to stay alive) and this exposes you to exploitation by your enemy.  You need to understand what your emotional biases are and instead of making an emotional response realize that you have a choice.

 

What are your expectations for the presentation?  53% were open, 37% thought it was interesting, 7% thought it would be entertainment.

 

How do you view OTTs – 61% thought they were an opportunity for partnering to get into new businesses, 24% thought they were an opportunity for driving more traffic, and only small percentages saw them as a threat.  (Comment – I expect this is not really reflective of a Telco management audience.  Most, if not all, of the people here come from R&D, and R&D people are trained to see opportunities and not worry about threats to the business.)

 

Conclusions

-          a leader determines the “faith” of a company

-          Emotions get in the way of the right action.

 

The paper describes more topics – appraisals, how to gather information, looking at the whole to get to win-win, and the concept of “Shih” – getting and maintaining initiative.  “You have to be good at what you do otherwise you have no right to be in business, but you won’t win a war because you have a good army – you have to have something extra to get an advantage for your side.”  (One thing he cited was that Telcos were typically very open because they determine things through consensus standards, while OTTs are secretive – they develop their initiative in secret and launch it into the market; nobody saw the iPhone before it was out there.)

 

Question:  The Incubator idea is a great idea, but it’s also an old one (just clarification as to what T-systems had already done in this area).

 

Question (for the Kimbler paper) – how do you rationalize the “value pricing” concept with Network Neutrality?  Roberto tried to answer, acknowledging the point.  He felt that even if they were allowed to do this it would destroy the trust of the end user – users migrate to operators which don’t block traffic or charge extra for access to popular sites.  (Comment – yes, and this is also why we have so many “unlimited data” plans – users like them, and as long as anyone will offer them someone else will go there.)

Hans answered saying that you can offer whatever you want as long as you are transparent about it.  You could offer a cheap plan that does not offer access to Facebook.  (Comment – I’m not sure this would really avoid the Network Neutrality issue, which assumes the access provider is neutral to content)

 

Question (Stuart Sharrock) – does the answer on how we view OTT really represent denial instead of openness?  This is a disruptive technology transition, and one characteristic of disruptive technologies is that they tend to usher in a whole new set of players who become the new dominant players.  He feels we may be seeing this happening in Telco, like the automobile changed the players in transportation.  The Telco world has a set of dominant players (Telcos) who are “FRAND” players (Fair, Reasonable, And Non-Discriminatory).  The new players are GAFA (Google, Apple, Facebook, Amazon); some would add Microsoft.  These players have a very different culture – it’s winner take all, and it’s all based in California.  There are big cultural differences.  His claim was that if you ask someone outside our field who the players in Telecom are, they are more likely to name a GAFA player than a Telecom company. 

Hans – don’t you still need access?  Yes, but it’s invisible. 

Stuart – Telco operates in a regulated world, GAFA is unregulated and unregulatable.  Thomas --  Why don’t we regulate Skype?  Stuart – felt there was no way to do it.  Stuart talked about working in an area that looked at sexual services over the phone.  The companies in this industry demanded the best technology, and they worked actively to avoid regulation by migrating into countries which wouldn’t bother them.  (Comment – yes, “phone sex” companies had the most sophisticated phone services and indeed used them most effectively to avoid regulation by national governments in my experience as well).  As another example, the British Royal family has sued newspapers who published nude photos of them, but they haven’t sued YouTube because they haven’t figured out how to do it.

 

Question/Comment – Isn’t one of the issues that the OTTs are uncooperative, even with each other – no common interfaces, no interworking, each wants to own the world. 

 

Question (Chet) – is there a forum where OTT players and Telcos can meet to, say, figure out standards for open access to Telco assets like the HSS?  Answer (Ghida Ibrahim) – it’s needed, this is a natural area of cooperation, but OTT players may not be aware of this need now.  (Comment – it is notable that she did not propose a forum where this is happening, which I believe is the real point behind Chet’s question.)  She proposed 3GPP, IETF, and one other as a possibility, but I still think the issue may be that the OTT players’ culture is not to do things like this.  Roberto said that there are operators proposing to do something like this (“Network Virtualization Forum”).  (Again, I think the real issue is that the OTT players aren’t looking for it.)

 

Roberto challenged the definition of “OTT” – they are at the edges, we are in the middle.  Google has framed this so that they have the terminal (Android) and the services, and hide the middle from the end user.  (Comment – this is exactly what happened in the ISP world, isn’t it)

 

The real business of Apple is selling “closed” boxes.  Amazon is doing the same thing (Kindle).  60% of applications don’t break even.

 

Question – someone took exception to the characterization, used in the first presentation, of network elements as black boxes and of protocols as not open.  His point was that Skype is closed, SIP is open, and all the Telco functional elements are well specified with open interfaces.  The speaker clarified that she challenged SIP as open because it has a limited community.  (Comment – this is fascinating.  SIP was a response from people in the IETF world to the Telco solution, which is H.323!  We have apparently forgotten this history!)  Her comment on black boxes was more that while these functions are well defined there’s no API that allows you to build directly on them.

 

Question (ETSI Keynote Speaker)  He commented on an ETSI effort on Media and questioned the Telco community on the observation that other people outside of Telco have a hard time getting into that and similar ETSI efforts – they don’t see us as open and ready to share.  That perception is not a new one.  (Chet reviewed the history of IN and AIN as an example).

 

Comments (Rebecca) – She indicated a way to get open access to the PCRF which will be described in her paper.  Based on her role in Connect World she made an observation that the operator’s view of the situation is changing and we may see new efforts.

 

Roberto’s comment – Even if there are interfaces to our internal network “boxes”, there is mistrust about our ability to change it.  “If Google was going to create IMS, they would have done it on their own”.  Google et al are software builders, Telecom are software buyers.  (Comment – this is very interesting, and one of those key points in why Telco operates by consensus standards and the internet players operate by taking their best shot and hoping to dominate their competition.)

Session 3:  M2M and the Internet of Things  (Stefan Uellner, T-Systems Chair)

 

Design of RESTful APIs for M2M services (Asma Elmangoush, TU Berlin)

 

What does M2M mean? – it is a paradigm in which the end-to-end communication is executed with no human intervention.  Motivations include increasing automation in daily life, and a big increase of the number of mobile non-human endpoints.

 

She put up a slide showing industry participants in standardization, vendors, and operators, each a very big group of major players.  The market forecast (Cisco) was for traffic from M2M to equal or exceed today’s total network traffic by 2016. 

 

This isn’t new, today we have many domains in which there is an established way of communicating with machine endpoints at a distance, but they are not integrated.  She described this as the intra-net of things.  Where we want to go is to unify this so the same means can be used across domains which results in economies of scale and development and also allows new possibilities in integrating the capabilities of endpoints that today are viewed in different domains.  

 

The proposed solution was an IMS-like middleware layer that serves as an intermediary between applications and networks, but is oriented towards the needs of M2M communication.  (Comment – she also has an association with Fraunhofer FOKUS, which has co-authors on the paper, and this is their approach as presented in Monday’s tutorial.)

 

Their work is based on the FOKUS broker and their tools.  They addressed this area by adding APIs to the broker for machine-to-machine.  They chose a RESTful style for data exchange as the basis of the APIs, because it fits the machine-to-machine paradigm very well.  The APIs include:

 

-          Open MTC Network, which handles overall connection.

-          Device APIs, which handle finding devices and capabilities through discovery, managing groups of devices, executing commands on devices, and subscribing and unsubscribing to notifications from the device.

-          Data APIs – these cover getting data from devices, making it available to other applications, and controlling the automatic feeding of data from devices to applications.

 

She gave a use case scenario which involved collection of data from sensors in the home.  The model would be that the sensors would be set up to automatically feed data to the M2M core, and then the broker could be used to access and evaluate the relevant data.  (Comment – this is very much a push model for the data.  I wonder if this creates problems in that a lot of data that is being pushed into the core may never be used.)
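
(Comment – the flavor of the RESTful style is easy to show.  The sketch below is my own illustration of the push model and the Data API, not the project’s actual interface; the host name and resource paths are invented.)

    import requests

    M2M_CORE = "http://openmtc.example.org:8000"    # hypothetical broker/core address

    def push_reading(container, value):
        # Device side: push a new sensor reading into a data container on the core.
        return requests.post(M2M_CORE + "/containers/" + container + "/contentInstances",
                             json={"value": value, "unit": "degC"})

    def subscribe(container, callback_url):
        # Application side: subscribe so the core notifies us instead of us polling.
        return requests.post(M2M_CORE + "/containers/" + container + "/subscriptions",
                             json={"notificationUri": callback_url})

    # push_reading("livingroom-temp", 21.5)
    # subscribe("livingroom-temp", "http://app.example.org/notify")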

M2M is gaining interest and is essential to the Internet of Things.  Application platforms and APIs need to be foundations.  The proposed APIs provide application developers with a foundation for handling the “CRUD” (create, read, update, delete) operations.

 

Question (Hui-Lan Liu) – when you design a RESTful API do you have any assumptions about the nature and constraints of the device (e.g. limited battery, limited connectivity, etc.)?  Answer – their APIs are at a higher level, organizing what has already been specified by ETSI in terms of the details of how to access the device.  (Comment – I think this probably touched on the item I mentioned above – with some devices you may not want a model where the device is assumed to be always on or always connected, but I don’t think their work really addressed how those kinds of limitations are factored into the design.)

 

Evolution of SIM Provisioning towards a Flexible MCIM Provisioning in M2M Vertical Industries (Harald Bender, Gerald Lehman, Nokia Siemens)

 

He started by looking at the difference between subscribers and devices.  Devices are long lived, and generally managed by companies that deploy the application.

 

Ideally the majority of new devices that require connectivity will support mobile connectivity in the future.  This requires that the activation of subscription information must be easy and fast, and the management required for these devices must be low.  At the same time we need to maintain network security at a high level.  Average Revenue per device will be much lower than for human subscribers.  (Comment – interesting.  I had never thought about a machine endpoint as a mobile network subscriber, but he’s right, today’s definition of “subscriber” from a management and revenue perspective will not work for simple devices with the occasional need to communicate.)

 

The UICC dilemma – UICC is a single integrated circuit card which holds SIM, subscriber information, and profile.  With M2M we want to be able to switch mobile network operators because it’s possible that the device will outlive the mobile operator, and almost certain that pricing and competition will make it desirable to change.  The problem is that in order to do that today you have to switch out the SIM card, which requires a physical intervention and is therefore very expensive.  (100 Euros per device, probably because of the need to send someone out to find the sensor and do the update.)  There are proposed solutions based on download of credentials over mobile bearer, but that creates security problems and isn’t a favored solution.

 

Business challenges – low “ARPD” (revenue per device), and the change of UICC (SIM Cards) during lifetime isn’t feasible.  The solution to these problems has to preserve security (prevent fraud), but at the same time allow free choice of operator when the device is activated and remote update to operator subscriptions over time.

 

A Non-removable UICC will be default for M2M simply because it reduces costs in “cheap” endpoints.  Devices are unattended, which makes security even more important, and this requires significant changes to the provisioning process.

 

The solution uses 3 concepts – handover of credentials, embedded UICC, and a revised concept.  Key features needed are:

-          A shielded, protected and trusted area at the device for hosting credentials. 

-          A central entity hosting credentials (outside the device and addressable from anywhere). 

-          A small circle of trust

-          A delayed download of the credentials which allows subscription change in the field.

 

Handover of credentials can be accomplished by having the two MNOs cooperate and trust each other during the handover.  There is another possibility of pre-provisioning the device with a list of credentials and instructing the device to go to the next set, then having the new serving MNO get the information from a central repository. In this case everyone has to trust the repository.  This solution doesn’t do any over-the-air download of credentials. (Comment – if there is a limited name space for credentials it will also exhaust it a lot faster since each new device will probably get a lot of them pre-provisioned “just in case”)
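
(Comment – a toy model of the pre-provisioned variant may make the idea clearer.  Everything below is invented for illustration; it just shows that switching operators becomes an index change on the device rather than an over-the-air download of secrets.)

    class EmbeddedUICC:
        def __init__(self, credential_sets):
            # e.g. [{"id": "cred-1", "mno": "MNO-A"}, {"id": "cred-2", "mno": "MNO-B"}]
            self.credential_sets = credential_sets
            self.active = 0

        def switch_to_next(self):
            # Activate the next pre-provisioned credential set; the new serving MNO
            # then fetches the matching keys from the trusted central repository.
            if self.active + 1 >= len(self.credential_sets):
                raise RuntimeError("credential list exhausted")
            self.active += 1
            return self.credential_sets[self.active]

    uicc = EmbeddedUICC([{"id": "cred-1", "mno": "MNO-A"},
                         {"id": "cred-2", "mno": "MNO-B"}])
    print(uicc.switch_to_next())   # device now attaches using the MNO-B credentials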

 

There is a GSMA proposal for this (the Embedded UICC proposal), details of which are in the talk.  The solution uses a new central entity, the “subscription manager”.  The concern is that in reality there will not be a single subscription manager but a set of them, possibly a large set, and they have to cooperate and trust each other for this to work.

 

One question is who takes responsibility for mobile network access when the device is produced but won’t immediately be put into service.  The answer may be to give the device owner more responsibility for this.  (Comment – this works when the devices are owned by a company which is managing them, maybe not when the device will be sold directly to the consumer for self-install.)  This approach has the potential to reduce the administrative load on the mobile network operator.

 

His conclusion is that the handover of credentials is a stopgap approach, that embedded MCIM  and giving the owner more responsibility will be better long term.  This is a critical issue for the success of M2M in the mobile domain.

 

M2M and Internet of Things – Towards Halos Networks (Antonio Manzalini, Telecom Italia)

 

The work is motivated by the view of what happens when you have a huge number of machine endpoints at the edge of the networks.  He likened them to ants – simple organisms that exhibit complex behaviors.  “The network disappears into the ecosystem”.

 

Imagine a rainy day.  Initially wet regions are isolated, and progressively wet regions become connected and we can find a wet path interconnecting them.  This is the vision of halos networks – incremental growth of connectivity.  A halo is a subset of a network centered around something (WiFi, kiosks, people, etc.)  When a critical density of halos is achieved, network connectivity for all endpoints spontaneously emerges.

Intel has just unveiled a WiFi sliver of silicon that can be embedded in any microprocessor to WiFi enable potentially anything.  (Comment – yes, and part of the problem of our industry is that they just did this and didn’t have to worry about security, billing, subscription, provisioning, etc.)  Lots of other trends like Smartphones, cheap sensors.

 

He sees this as two emergent trends:  Internet of Things (lots of things communicating), and Internet with Things (lots of things being mirrored by virtual entities in “the cloud”, that makes them accessible to people through the internet).    This is a complex adaptive system (CAS), and typical CAS have phenomena like self-organization and occurrence of phase transitions where there is sudden spontaneous change of behavior from one state to another.  Examples include a crowd of people, a nest of ants or bees, or the human brain.

 

Question:  Are we reaching a singularity where technological change becomes so rapid and the impact so deep that human life is forever changed?  Some of the evidence for this is looking at the growth of processing and connectivity.   Both cross a threshold where capacity rises extremely rapidly (fully connected networks happen suddenly and then give you greatly increased processing capacity.)

 

What is needed?  Smartphones, WiFi Hot Spots, and lots of tiny PCs (Raspberry Pi costs only $20 and has all you need).  There are biological analogs of a lot of the behavior of networks like this.  Different networks can learn about each other’s resources.  Cooperation is very important in making networks like this work. 

 

But, WiFi isn’t well suited to support this kind of spontaneous organization because of the way the MAC layer works.

 

He outlined two strategies:

-          Blue Ocean strategy – focus on new customers

-          Red Ocean strategy – focus on existing customers. 

 

Blue Ocean has more promise to grow the network.  Some examples:

-          City operating system that gathers information from sensors and uses actuators to accomplish some action (e.g. synchronizing traffic control or public transit?)

 

His conclusions included that value is moving from networks to terminals, services are provided at the edge of the network, pursuing only a traditional approach and adopting a walled garden approach is detrimental in the long run. This is the motivation for non-traditional approaches to Internet of Things – Operators become providers, but IOT can’t be managed the way we manage today’s endpoints.  Self organization makes life easier (lots of other conclusions I was having trouble fully understanding.)

 

 

Question (to the whole session)  Can we learn from work on mesh networks?  Answer – yes, (I think).

 

Question to speaker on RESTful APIs – what are some examples?  They have done some example applications that show the use of the platform, but their work is on the platform not the applications.

 

Question – what are the real use cases for which you are looking at Halos that demonstrate what they can do that other approaches won’t?  He gave a scenario where you could assemble a supercomputer from 64 Raspberry Pi devices quite easily, imagine scattering a lot of these around the network and allowing them to communicate, you might wind up with the ability to support large amounts of “supercomputing on demand”.  (Comment – I still have trouble really seeing this in concrete terms.) 

 

Question for the NSN speaker on M2M provisioning – is it realistic to have the authentication key shared from one operator to another?  Answer – it’s clearly possible technically, but not realistic because it doesn’t fit the operator’s business models or practices to support it.

 

Question (Dan Fahrman) How do self organizing networks handle security?  He answered with an analogy to Ants defending their nest and said the big change is the huge number of nodes.  (Comment – again, it’s hard to be concrete here, but I can actually see problems with the “ant” analogy – it is entirely possible to hijack the ant’s self defense processes with chemicals or ants that have been fooled.  This is all much easier with technological entities that are more easily changed than biological entities. I think relying too much on nature analogies might be dangerous because our technology doesn’t replicate all the characteristics of nature.)

 

Partner Keynotes (Igor Faynberg, Alcatel-Lucent Chair)

Self Organizing in Heterogeneous Wireless Networks (Henning Sanneck, Nokia Siemens Networks)

 

He began with a vision of networks in 2020, which was a need for a thousand times more capacity (10x spectral efficiency, 10x in spectrum, and 10x in base stations), 10 times the users, gigabit speeds, and other huge scale factors.  (Comment – I wasn’t in the room when he covered this slide first, and I don’t know what he really is referring to in increasing spectral efficiency, because I don’t believe there are technologies out there with the potential to do this.)  Use cases can’t be used to completely model the needs because we can’t support them.  Against that he looked at the forest of technologies being deployed and the fact that they won’t all go away in that time frame (LTE, HSPA, GSM, femtocells, WiFi).  (Comment – WiMAX didn’t appear on the chart, nor as far as I can tell on anyone else’s, nor did “white space” technologies.  I don’t know whether this is just that the people at ICIN don’t focus on these technologies or whether these technologies aren’t seen as serious players.)  Management becomes a serious challenge because each radio access technology has multiple layers, and every part needs to be administered. 

 

Another aspect of the network is that it is multi-vendor.  It used to be that vendors dominated in geographic regions, but now every part of the network becomes multi-vendor which means a much greater need for inter-workability both in function and in management.

 

Multi-architecture is another aspect.  Some pieces are centralized with processing brought into the cloud, while others are physically distributed.  “Densification” and higher distribution brings complexity.  This drives up operating expense.  This is the motivation for self-organizing networks to reduce the complexity of managing this environment.  He contrasted today’s cellular network which tends to be single vendor, single operator, tightly planned and automated, with the ad hoc mesh network as described by Halo networks. They are working on incrementally applying self organizing principles to the mobile environment.

 

An example was Automatic Neighbour Relations.  Today you plan site coverage, then manually plan and lay out which nodes wind up where and the information they need to know about each other.  They propose doing that automatically, where the sites discover their adjacency and backhaul relationships. 
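
(Comment – in toy form, the automation amounts to maintaining the neighbour table from what the handsets actually report rather than from a plan.  The sketch below is illustrative only, not NSN’s implementation; all names are invented.)

    neighbour_table = {}    # serving cell id -> set of discovered neighbour cell ids

    def on_measurement_report(serving_cell, heard_cell):
        # A handset reports a cell it can hear that the serving site does not know;
        # add the relation automatically instead of waiting for a planner to do it.
        relations = neighbour_table.setdefault(serving_cell, set())
        if heard_cell not in relations:
            relations.add(heard_cell)
            # a real site would now resolve the neighbour's full identity and set up
            # the backhaul relationship it needs towards that cell

    on_measurement_report("cell-A", "cell-B")
    print(neighbour_table)    # {'cell-A': {'cell-B'}}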

 

He showed a set of processes for managing networks with the idea that configuration, optimization, and healing of failures could be done by self organizing elements.  There is still a need for human interaction and configuration, which they label “SON Coordination”, and it manages these 3 self operating elements.  This is all under development at some level, not existing products, a vision and a partial product set.  He went through all the elements as to what is required and what it would do at a high level. 

 

His conclusions included that the mobile data traffic explosion is driving increasing capacity into operator networks.  This happens against a background of decreasing revenue per user and demands lower cost management.  Self Organizing Networks are an approach to drive down costs.  (Comment – this is certainly a nice concept, but my experience in the past has always been that the largest costs for Telecom operators are in transitioning the way they manage their networks to new techniques, both in system changes required and in people/process changes.  The result has been a large and growing list of legacy systems that new elements in any network will need to interface with and hopefully not disturb.  This isn’t a sustainable approach, but I don’t know that we have reached the point yet where network operators are open to rethinking their operations in radical ways to result in lower cost operations “in the future”, even if it means a large cost in adopting the new approach.)

 

OSS and BSS: Driving Radical Change (Beauford Atwater, Ericsson)

Beauford really comes from what was Telcordia, the US-based common R&D organization created to serve the regional Bell operating companies following the 1984 AT&T divestiture.  Telcordia evolved from central planning and research to being dominantly a vendor of software, primarily OSS/BSS, for telecom, before being acquired by Ericsson.

 

 The real focus is not radical change of OSS/BSS, but the role of OSS/BSS in handling radical change (I think).  50 billion connected devices by 2020 (Ericsson Vision). We get there in 3 waves:

 

-          Networked Consumer Electronics

-          Networked vertical markets (e.g. automotive)

-          Networked everything (crossing the verticals)

 

In his view Consumer is actually behind the verticals, because “wired buildings” preceded it.  Beau is a runner and demonstrated all the elements he runs with that are connected in a body area network (sensors for everything including heart rate, steps taken, etc.)  The “Networked Everything” stage is when you start to see emergent behavior since you have the opportunity for things to interact in unplanned ways.  (Comment – yes, this looks like the communication equivalent of what we called “mash ups” in the web world, when you had everything accessible people put the information and capabilities together in unanticipated ways.)

 

He reviewed the history of OSS/BSS.  He talked about planning processes in telcos.  A base station earns $160K/month (in some company), but that company takes 16 months from the time they decide they want one to actually getting it installed and working.  Imagine the impact on the bottom line if we could shrink that interval. 

 

Another example: “trouble-to-resolution”.  This used to be a linear process where troubles were logged, analyzed, and isolated; then repair was dispatched, performed, verified, and resolved.  This process represents 35% of the operator’s cost.  Telcordia saved BellSouth a large amount of money ($350M I think is what I heard) with a small investment in dispatch scheduling ($10M).

 

You have to look at the whole picture.  It does no good to have a 98% automated recovery rate that fixes something quickly if the 2% that miss take weeks, since you will do worse than someone who has only a 95% automation rate that can handle the outliers in an hour or two. This kind of thing drove installation of OSS/BSS systems.  (He reviewed that the first system was TRKS in 1968, and it’s still running!)
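
(Comment – the arithmetic behind that point is worth making concrete.  The numbers below are mine, purely illustrative, not anything Beau showed.)

    def expected_hours(automation_rate, auto_hours, manual_hours):
        # Expected time to resolve a trouble: automated fraction resolved quickly,
        # the rest handled manually.
        return automation_rate * auto_hours + (1 - automation_rate) * manual_hours

    print(expected_hours(0.98, 0.5, 3 * 7 * 24))   # misses take ~3 weeks -> about 10.6 hours
    print(expected_hours(0.95, 0.5, 2))            # misses fixed in ~2 hours -> about 0.6 hours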

 

The Local companies have the most efficient processes anywhere, but it’s not necessarily relevant:

-          POTS is in decline as is private line

-          Mobile is taking over and it has fundamental differences.

-          The supply environment is much more complex (partners, embedding in other applications etc.)

 

He went through some of the differences in the OSS/BSS need, from off-line to online, from subscriptions to marketing, from Billing to Payments, and the ability to manage complex supply chains.

 

He talked about a new model where users could easily obtain the services they want and a set of systems that help the operating company realize this through marketing campaigns, promotions, management of the provisioning, and user interfaces.  New networks bring many challenges, like “can I deliver this content to this user on this device”.  Users need help in understanding what they are getting.  Everyone knows what a Megabyte is but few know in advance how many megabytes are needed to do something on a website or to download some item of content. 

 

He asked if anyone here used Google+ (nobody – It’s Google’s answer to Facebook and I’ve looked at it but not yet a user). 

 

He talked about the personal fitness equipment network – now he uploads it and it’s just for him.  Imagine if this were public and someone could get access to it to do things like compare your EKG against standards used to predict people vulnerable to heart attacks.  (Comment – Yikes!  I can’t imagine how to do things like this without having them considered intrusive.)

 

Question (Stuart), really addressed to the whole community.  Why do we often find Telcos abusing their customers?  By this he means having customers get very unpleasant billing surprises because of simple mistakes.  We should have systems that prevent this and we do have them, is the problem we aren’t using them?  Another example is that with the complexity of price plans end users are often in price plans that are bad for them, and never learn it until a competitor goes after them and they are lost to their current service provider because the user perceives that their bad plan is the best the service provider can offer.  Beau commented that a frequent problem is legacy billing systems.  Most operators offered unlimited mobile plans because at the time they couldn’t do anything else.  Now they can do better, but are slow to be able to do it.

 

Observation:  What we call data may in fact be voice (Skype, Google Talk, etc.)  Is the cross over from voice to data really as rapid as predicted?  (Comment – does it matter?  If the user treats it as data it’s data!, Beau agreed with this as I was writing it.)  Beau also observed that the same problem pervades the crossover to video – if video looks like data to us we have to optimize the handling of data to meet the needs of video to keep the customer happy.  Stuart – Voice is data with a latency requirement.  (Comment – yes, but if the network is “good enough” and customer expectations are “low enough”, that requirement becomes unimportant, which is I believe where we are with services like Skype.)

 

Comment from Max:  The roaming issue is complex.  The real problem is that no system provides the information needed in a timely way that will notice, for example, that someone is roaming with a data intensive iPhone app installed before they run up a large bill.  The system was never designed to do these things.  Stuart agreed with the view that smart phones took Telcos by surprise and this had big implications.  His question was what sort of time scale is needed to make the changes needed to address this?  Answer (Beau) – much longer than we need it to be.  (Comment – you might need to introduce fundamental new capabilities in LOTS of network elements to do something fundamentally new like real time billing and usage restriction.)  Henning – we are doing better with LTE, but there are still surprises.

 

Question (Andreas):  Moving away from “Bill Shock”, he wants to know what the impact of cloud technology is on this area.  Is there an opportunity to apply the cloud to revolutionize Telco operations by replacing big networks of dedicated equipment with cloud based services that do the same thing?  Maybe the model is even more radical “OSS/BSS” as a service?  Henning – we are doing that and some of that is available today.  The non-real-time nature of today’s OSS/BSS should make this easy because it doesn’t stress cloud performance.  Beau – they have a lot of cloud based OSS/BSS.  There is an issue though – the link to the cloud costs money and in many cases is a large part of the cost.  Part of the problem is that the cloud providers are mostly in the US, but your demand might come from the 3rd world where data connectivity to the US is expensive.  In fact they did some case studies and often found it was cheaper just to buy everything and install it locally.  (Comment – that’s kind of a condemnation of the approaches of our industry.  I guess I don’t see that the data communications requirements for OSS/BSS are exceptional by today’s standards and as a result this shouldn’t be a barrier.  Is it really harder to do this than to have end users hitting Google and Facebook pages?)

 

Question (missed a lot of this but it was about the OSS/BSS monitoring the network to determine users with unregistered devices or bad plans and pro-actively offer to fix the situation.)  Beau agreed that this was possible, but they need to work closer with handset manufacturers to, for example, be able to shut down or slow down applications which are burning up battery or creating an expensive roaming demand.

 

Henning – someone should be looking at the network impacts of a new app as part of introducing the app, so users know what to expect.

 

Stuart – what capability do operators have to detect kinds of traffic?  Beau – you can determine the phone type and firmware and other characteristics that the vendor exposes.  To discover what users are actually doing you have to put probes in your network (Comment – this is like application aware packet filtering or firewalls)

 

Question:  Self organizing networks are a nice idea but not the whole story.  How do you handle things like security and emergency service?  (The questioner was really raising an objection to the all-IP network as an assumption, because SIP won’t solve some of these problems as well as SS7 did.)  Henning – agreed with the need to ask questions about what we are doing (i.e. is it really effective to migrate everything to IP given we have a big network installed and working to handle voice).  (Comment – I expect that the migration to IP is in fact unstoppable.  This is essentially what us old timers would see as “Betamax vs. VHS”.  It doesn’t matter which is the better technology; if all the effort is going into one technology the other will atrophy and become much more expensive.)

 

Stuart – cybercrime is now a much bigger issue than it used to be.  How resilient and secure are self organizing networks?  There are people working on trying to hack into networks and shut them down today.  The threat is real, we have to be able to be robust against it.  Henning – They worked to design in security (he went through some of the details).  He agreed that this is a much more significant threat as hackers gain access to big capacities from botnets (or supercomputers) which enable attacks that the designers may not have considered feasible.  Also we have to be careful about assumptions about behavior.  Beau:  When Telcordia was acquired by Ericsson they actually had a cybersecurity unit that in part tried to hack into companies to demonstrate vulnerabilities.  Appallingly they were 70% successful even with companies that weren’t on the Internet.  Telecom has avoided this because we aren’t the most interesting target.  (Comment – as someone who has worked in computer and communications security I always have to point out to people that the majority of serious break-ins occur not because the hacker broke a lock (got through some way the designers anticipated), but because they found a back door based on faulty implementations, human behavior, or simply unanticipated manipulation of the environment (e.g. taking control of a drone aircraft by deploying nodes that emit false GPS signals that convince it to fly where the hacker wants).  This is an increasing problem as systems get more complex.)

Poster/Demo Session (Warren Montgomery, Insight Research Chair)

The session had 4 presentations accepted from the call for papers, plus two sponsors’ booths. 

End-user Configuration of Telco Services (Richard Sanders, Sintef)

This was a joint project between Sintef and Gintel, both from Norway; both companies have presented service creation or execution environment work before.  The demonstration was of a tool and execution environment running on an Android phone that allows the user to piece together call handling “scripts” that control how calls will be handled.  The app runs entirely on the phone and can be freely downloaded.  The editor is reminiscent of the Service Independent Building Blocks from IN.

 

The demo worked reasonably well given the limitations of the Android environment (apparently it is not possible for the script to divert a call before the phone tries to ring the first time, due to the structure of the firmware).  They are working on extensions that allow this environment to interwork with a network based application server, and to give an enterprise focused variant of service creation that allows the enterprise manager to establish basic call handling policies which users can customize.  (Comment – I’ve probably seen demonstrations of user-oriented service creation environments in every conference on network services I’ve attended for the past 15+ years.  The technology here is interesting, but the larger question may be whether this will ever be something that a lot of end-users will want to do.)
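
(Comment – the building-block model is easy to picture as an ordered list of condition/action rules applied to each incoming call.  The sketch below is my own caricature of the idea, not the Gintel/Sintef tool; the numbers and actions are invented.)

    RULES = [
        # (condition on the incoming call, action to take)
        (lambda call: call["caller"] == "+4712345678",        "forward:+4798765432"),
        (lambda call: call["hour"] >= 22 or call["hour"] < 7, "send_to_voicemail"),
    ]

    def handle_incoming(call):
        # Apply the user's script: the first matching rule decides how to handle the call.
        for condition, action in RULES:
            if condition(call):
                return action
        return "ring_normally"

    print(handle_incoming({"caller": "+4700000000", "hour": 23}))   # -> send_to_voicemail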

Base Station Direct Transmission for Mobile Enterprise Service – Opportunities, Challenges and Solution (Xiong-Yan Tang, China Unicom)

This was a classic poster-session presentation without a demo.  The authors propose to allow the lower levels of the mobile data network architecture to be directly interconnected to reflect end-user traffic patterns and optimize the flow of data.  This is analogous to creating a private network for fixed voice and data to serve enterprise needs, which may require large volumes with strict QoS requirements between fixed locations.  As more businesses become mobile-only, this becomes more important.

 

Privacy Preserving Data Mining Demonstrator (Martin Beck, TU Dresden, with Nokia Siemens authors)

This demonstration was really part of a conference presentation where he talked about the details of how they can transform databases to allow data mining without revealing individuals.  Most of my comments are with that session.

Monitoring and Abstraction for the Networked Cloud (Michael Scharf, Alcatel-Lucent)

This was also a demonstration based on a presentation elsewhere in the conference.  The system was demonstrated creating and monitoring a virtual desktop application based on the IETF ALTO (Application-Layer Traffic Optimization) standard.

 

 

Session 4B – Public Safety and Public Policy (Bernard Villain, Cassidian Chair)

Proposed Presence System for Safety Confirmation (Hiroaki Nishihatana, Kogakuin University, Japan)

The paper was joint work with NTT and really a response to the needs arising from the 2011 Japan earthquake and Tsunami.  The focus of the work is making it easy for people to notify others that they are safe and where they are in an emergency.  It is partly based on the observation that the telephone network is often choked and unusable at these times, while the internet will allow communication.

 

The system operates based on presence information that generally includes where the user is and their status.  Users connect to the system and update their presence information over time; the system saves that information and makes it available to others who are seeking their status.  To be useful, the system has to make it easy for users to continuously update their presence status so that the information is current, and it must provide storage for information when the user is not connected (i.e. display where they were and when they were last there).

 

The solution includes the use of a contactless smart card and external social networks.  A contactless smart card can be sensed by stations distributed at various locations and contains the user's identity information as well as information about where their profile information is stored.  When the card is swiped, that information is sent to a presence server.  The presence server contacts the profile server to obtain full user information.  The profile server (perhaps a social networking service) maintains a log of where the user has checked in, so that others can determine where the user was last "checked in."  Building on this made it easier for the user to update their information. 
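
(Comment – as I understood the flow, it is roughly: card sensed at a station, check-in forwarded to the presence server, log kept at the profile/social-network server.  A minimal sketch of that flow follows; all function and field names are my own invention, not the prototype's actual interfaces.)

# Minimal sketch of the check-in flow described above, with invented
# names; the real NTT/Kogakuin prototype APIs are not public here.

import time
from typing import Dict, List

PROFILE_LOG: Dict[str, List[dict]] = {}   # stands in for the profile/social-network server


def profile_server_append(user_id: str, entry: dict) -> None:
    """The profile server keeps a log of check-ins for each user."""
    PROFILE_LOG.setdefault(user_id, []).append(entry)


def presence_server_checkin(card_payload: dict, station_location: str) -> None:
    """Called when a contactless card is sensed at a reader station."""
    entry = {
        "location": station_location,
        "timestamp": time.time(),
        "status": "safe",
    }
    # The card carries the identity and a pointer to where the profile lives;
    # the presence server forwards the check-in there.
    profile_server_append(card_payload["user_id"], entry)


def lookup_last_seen(user_id: str) -> dict:
    """A searcher with access rights asks where the user last checked in."""
    log = PROFILE_LOG.get(user_id, [])
    return log[-1] if log else {}


presence_server_checkin({"user_id": "alice", "profile_url": "example.invalid"},
                        "Main Building Entrance")
print(lookup_last_seen("alice"))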

 

To find a person in an emergency, someone swipes their card, and if it is the first time they have used the system they have to enter their "social networking" information (i.e. the profile service).  Otherwise that information is already stored on the profile server.  If the person who is "lost" has granted access to their log, either publicly or specifically to the searcher, that information is made available.  The presence server automates this process using web APIs to access the social networking service.

 

They built a prototype using a display terminal and illustrated it in a video.  The card reader is a standard piece of equipment and plugs into a USB port.  As soon as a user logs in, they get displays of the logs for themselves and their friends.  (Comment – this might get to be a lot of information, but if they don't display all the "friends", the system would need some way for a user to select which one was of interest, which I suspect is why the prototype worked this way.)

 

He reported on a trial they conducted in a university environment, looking at how long it took a user to read about the system and understand how to use it, and also how long it took for the user to use it successfully.  The results indicated that this smart card based system was both much easier to understand and quicker to use than 3 other systems under evaluation for "safety check-in". 

 

Future work will include addressing some of the clear privacy issues with this work. 

 

Question (Me) – The list of contacts may be very large; how do you select the one of interest?  Answer (given by one of his co-authors) – The follow-up work will display that information on the user's smart phone.  (Actually he indicated that they expect the smart card to be built into the back side of a smartphone, making this easy to use and allowing the user to display the information and select from a browser screen.)  (Comment – it still sounds to me like potentially a lot of information has to be moved around to do this, and being able to accomplish this in an emergency situation with congested networks might be difficult.)

 

Question (Stuart) – He wanted to know more about the smart card reader and how they would deploy it.  Answer – they expect that these will be commonly mounted at entrances to facilities like classrooms, allowing the "swipe" to be done effortlessly by the user.

Sightfinder: Enhanced Videophone Service Utilizing Media Processing (Haruno Kataoka, NTT)

 

NTT has done previous work on augmented reality under the name "AiR Stamp", integrating videophone with augmented reality.  This basically allowed a video to be overlain with generated information out of a database to address needs like indicating how to operate or repair a piece of equipment, or showing where some component is.  This work was motivated by the need to develop more enhanced services.

 

She reviewed related work, including “Google Glass”, which uses special eyewear to overlay the user’s actual view with AR information.  The need for special glasses was felt to be a limitation.  As background she also talked about conventional IVR systems which deliver information via voice menus, and indicated that they need the equivalent for video/graphics information.

 

Their requirements included avoiding specialized devices (e.g. AR glasses), accepting any kind of media processing technology, and providing for personalization of "IVR" by allowing user attributes to customize what response is given and how it is displayed.

 

The architecture uses a videophone to connect to a media processing engine.  The media processor has parallel recognition systems that allow the server to recognize relevant information in the video.  The system can access an external presence server for user information based on who is calling and where they are.  Finally, information is fed to an MMCU which generates an augmented video and voice feed which is sent back to the user. 
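
(Comment – a rough sketch of that processing chain as I understood it: a frame comes in from the videophone, recognizers run in parallel, and an augmented response goes back on a channel chosen from the user profile.  Everything here – the recognizers, field names, and personalization rule – is my own placeholder, not NTT's system.)

# A rough sketch of the processing chain as I understood it; all names mine.

from concurrent.futures import ThreadPoolExecutor


def recognize_signs(frame: bytes) -> list:
    return ["warning: construction barrier"]      # placeholder recognizer


def recognize_obstacles(frame: bytes) -> list:
    return ["obstacle ahead, 2 m"]                # placeholder recognizer


def process_frame(frame: bytes, user_profile: dict) -> dict:
    """Run recognizers in parallel and build the augmented response."""
    with ThreadPoolExecutor() as pool:
        signs, obstacles = pool.map(lambda f: f(frame),
                                    [recognize_signs, recognize_obstacles])
    # Personalization: a visually impaired user gets audio, others get overlays.
    channel = "audio" if user_profile.get("visually_impaired") else "video_overlay"
    return {"channel": channel, "messages": list(signs) + list(obstacles)}


print(process_frame(b"...", {"visually_impaired": True}))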

 

Some applications:

-          Reading important safety information for visually impaired users.  (The user wears a camera which is continuously connected to the system, which can recognize objects and even read warning signs, giving a response via audio to the user's phone.)

-          Interactive marketing.

-          Tourist information

 

She played a video illustrating the service for the visually impaired.  (Comment – another obvious use from the video is real-time translation of important signs and other information for a user who cannot read the local language!)  The system demonstration showed a blind user getting an alert of a barrier in their path and being successfully detoured around it.

 

One thing they looked at was how long it would take for the system to recognize important features, to determine whether it could actually work in real time.  The prototype used WiFi as the communication mechanism and a local network.  Processing times ranged from a hundred milliseconds to a second and a half.  The objects could be recognized within distances ranging from 1.5 to 3.6 meters, and the processing times translated into a distance the user could walk during processing of about a meter or less.  Their conclusion was that it would allow real-time assistance.  (Comment – yes, but just barely.  I don't know how long it will then take the user to recognize that the system is providing some urgent feedback ("STOP!") and then how long it would take the user to react to it.  I believe human reaction times are also on the order of a second for things like this, and that might not be enough to keep the user from colliding with an obstacle.)

 

Question (Bernard) – How good is the recognition?  Are there false recognitions or objects that the system cannot recognize?  Answer – they apparently haven't really tested this in the prototype.

 

Question (Me) – Could you use this to assist someone travelling in a country where they don't know the language?  Answer – yes; she had some more slides on this as another use case, showing how the system could help a user understand a tsunami warning sign and where the evacuation area is.

 

Comment (Stuart) – He has seen demos of "Google Glass", and its whole focus seems to be on enhancing a user's experience, versus this work that is aimed at providing assistance; he congratulated them on focusing on public safety applications.

 

A Perspective on Regulation and Users’ Experience (David Ludlam, Retired)

David is a friend and was a long-time employee of Marconi.  He is a past chair of ICIN and has a long history of doing interesting work in network intelligence.  He began with a bit of retrospective about his tenure as TPC chair.  He acknowledged the dedication of his colleagues from Japan in supporting the conference and was very encouraged to see young people from Japan presenting interesting work and travelling to Europe to do it.

 

David’s main topic was motivated by discussions and events at ICIN 2011.  One speaker from Berlin-Brandenburg ICT made the remark that he had concerns about privacy, but his daughter does not.  An interesting statement about the times.  He reviewed Roberto Minerva’s comment (Made I believe during lunch that year) “Would you accept a mail service where the postman opens and reads all your letters?”  Obviously not, but Google claims email is different.  Is it?

 

He reminded us that Falk von Bornstaedt commented, in his best-presentation-award presentation, that an illegal file download blocked thousands of credit card transactions, and that the fault is really the lack of "calling party pays" as a rule.  He cited another speaker from Zambia who was looking at a better future based on Western models of regulation, and David questioned whether that was in fact the case.  He cited a comment from Chet McQuaide quoting the prosecuting attorney who pursued the AT&T breakup of 1984 describing it as a grand economic experiment.  He also cited a book ("Deal of the Century"?) on the AT&T breakup, which had data from consumer surveys indicating that 80% of Americans were satisfied with their telephone service before the breakup, and that 3 years after it only 25% of people thought it was a good idea.

 

He overviewed some discussions on "junk" calls, which are not just a nuisance because they can mean fraud.  The internet has clear problems with privacy and identity.  For example, his browser questioned the authenticity of an Air France travel site and a UK government site. 

 

"The industry has been uninterested in fixed line telephony for 10 years, but a lot of people are still using it."

 

In 2005, Kim Cameron proposed an identity metasystem that would provide a route to solve the identity problem in the internet.  The problem is that the internet was designed without identity in mind, and the threat is that fraud will cause people to mistrust the network and impede usage.  Have things changed since 2007? (Not really, but the problem has gotten worse). 

 

The World Conference on International Telecommunications has proposed that "Calling Party ID is a basic right of any called party" (with some exceptions by country in complying with local law).  He urged everyone to support this.

 

He indicated that the EU is updating the 1995 Data Protection Directive to insist that a user's explicit consent is required before their data is processed, that data portability is ensured, and that users have the right to have their data removed; probably most important, this would apply to any company doing business in the EU.  (Comment – this is the sticky point that causes the US to be blacklisted for data storage and processing by the EU.)

 

In 2001, many established professionals pleaded with Michael Powell (Chair FCC) to insist that VoIP providers support the regulatory services required of local providers.  He didn’t, and in fact promised just the opposite.  10 years later the government is trying to rethink this.

 

Should services like Google be regulated to force them to provide their services to others at a fair price?

 

David closed with two proposals:

-          Calling number delivery should be the norm, and exceptions rare (and never allowed for businesses).  The goal should be to provide calling name, not just a phone number.  (Comment – user privacy concerns drive the ability to block caller ID in the US.)

-          In addition to the proposed EU directive on data privacy, David believes that users should have a right to receive compensation for the value of the data they provide to internet services.

 

He ended with a plea for more solid work in this area to report in 2013.

 

Question (Hans Stokking):  Young people might not care about their privacy now, but they are learning that it may be more important in the future (i.e. old posted data being used against you in a job interview).  Should privacy be "opt in", or should we proactively protect users from themselves?  (Answer – maybe the EU directive will help?)

 

Question (Hui-Lan Lu):  Clarify the remark about the browser warning.  He felt that he was getting a defective tool if it couldn't reliably identify whether the site he visited was genuine or not.  He felt it was a problem that he had to ignore a warning from the browser in order to buy.  (Comment – isn't this like having to check "agree" to 10 pages of legalese you can't read just to do anything these days?)  Hui-Lan went on to describe the PKI infrastructure and why this is hard.  (Comment – yes it is, but I expect that nobody really cares whether the problem is a bad certificate authority, a defective browser, or real fraud; they just want it to work!)

 

Question (Me) – Privacy concerns apply to the caller too; don't individuals have a right to be anonymous?  David – agrees, but that shouldn't apply to a business.  (Comment – yes, I agree with that.)

 

Question (Stuart) – He has done work for governments, and commented on "the right to be forgotten" – today, you have NO legal right to be forgotten.  The real problem, though, is that once you put the data out there, even if you can force Google to give it up, others might have copied it.  There is no way that you can guarantee that those who have copied it forget it.  (In particular, it is believed that the US government archives a lot of data from the internet and is not subject to this, but it applies to every organization that mines data, for legitimate or critical activities.)  (Comment – this was of course my other question – is it really practical to track down and delete everything that might have been put on the internet?)

 

Stuart made an analogy between what is happening now in the internet and what happened in banking: "investment banking" brought in traders and scammers driven by greed, not following any code of personal responsibility, and created a banking crisis.  Are Google and Facebook the analog of this in the communications world?

 

Chet – Amplified the concern that lack of trust could cause internet commerce to collapse.

Session 5A: Context and Application Awareness (Rebecca Copeland, Core Viewpoint Chair)

She introduced the session with some examples of context – planes and birds flying in formation.  "We first make the tools, then the tools make us!" – Marshall McLuhan.  (Comment – I never thought about this, but one of the early characterizations of Artificial Intelligence was systems that "Do what I mean" (instead of what I said).  That is the essence of what context-based systems are about.)

 

(Another comment – as I wrote this I was struggling with the dark side of context awareness.  I was trying to use Google to locate the transit stop nearest tonight's locations for dinner and the organ recital, and Google is clearly context aware -- so much so that it insists on sending me to "Google.de", the German Google, because my network access is coming from Berlin, from which it was difficult to get an English language page!)

Context-Aware Service Composition Framework in the Web-enabled Building Automation System (Son N. Han, Institut Mines-Télécom, Télécom SudParis)

He is a Ph.D. student (he works with Noel Crespi, who has been with ICIN for some time).  He described a lot of standards and associations in building automation and gave some information on how they work, and how building information and "smart buildings" are becoming a very popular topic.  In reality, deployment of building automation is very low.  Why?  Because it costs too much.  Why does it cost too much?  Because everything is non-standard and proprietary.  There are also big issues of scale – it is too hard to integrate with many different interfaces.

 

As an alternative, he said people are looking at open web-based standards for building automation, building on the extensive work and toolkits available for web services, protocols, and interfaces.  UPnP, the Devices Profile for Web Services, and other interfaces are being applied here.

 

"Web of things" – a 5-layer web services architecture based on RESTful APIs for building automation. 

 

The system uses a building ontology to describe data in an XML schema which defines the relationships among the various elements

 

He showed a use case for a visitor reception service.  This included setting up messaging, elevator, lighting, meeting, and emergency services.  This would be initiated by recognizing an RFID tag that, for example, identified the visitor as an ICIN attendee.  As an example of the kind of logic the system might need: determining which elevator to direct the visitor to, based on their meeting, if not all elevators serve the same floors (as is true in the Park Inn that is the host hotel for ICIN!).
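
(Comment – a hypothetical sketch of what such a REST-based composition might look like.  The endpoints, URLs, and payloads are invented for illustration; they are not the author's actual API, and the sketch assumes the Python 'requests' package.)

# Hypothetical sketch of composing building services over REST, in the spirit
# of the "web of things" approach described above; the URLs and payloads are
# invented, not the author's actual API.

import requests   # assumes the 'requests' package is installed

BUILDING_API = "http://building.example.invalid/api"


def receive_visitor(rfid_tag: str) -> None:
    """Compose services when a visitor's RFID tag is recognized at reception."""
    visitor = requests.get(f"{BUILDING_API}/visitors/{rfid_tag}").json()
    meeting = visitor["meeting"]                     # e.g. room and floor

    # Pick an elevator that actually serves the meeting's floor.
    elevators = requests.get(f"{BUILDING_API}/elevators").json()
    elevator = next(e for e in elevators if meeting["floor"] in e["floors_served"])

    requests.post(f"{BUILDING_API}/elevators/{elevator['id']}/call",
                  json={"to_floor": meeting["floor"]})
    requests.post(f"{BUILDING_API}/lighting/{meeting['room']}",
                  json={"state": "on"})
    requests.post(f"{BUILDING_API}/messages",
                  json={"to": meeting["host"],
                        "text": f"Your visitor {visitor['name']} has arrived."})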

 

Context-Aware Data Discovery (Manfred Sneps-Sneppe, Ventspils University College)

 

He began by talking about his background.  He is a 50-year veteran in telecom, worked in Russia before 1991, and actually was in the Park Inn while it was still in East Berlin.  (He has a co-author at the University of Moscow.)

 

The work uses smartphone and WiFi access to get information about the environment ("I am in a big mall and I want to know what's around me").  WiFi isn't used for connectivity; it is used to discover data.  (Comment – I think what he means is using WiFi as a location technology to determine what data, which can be accessed in any way, is relevant.)  Bluetooth could be used for this, but they didn't have it on the Android phone they used for their work. 

 

The approach of SpotEx is as follows:  you can establish rules like "If a certain SSID (WiFi base station) is visible and it is lunch time, then display a coupon for a nearby restaurant." 
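
(Comment – a minimal sketch of how such a rule might be evaluated on the phone.  The rule format and function names are my own illustration of the idea, not the SpotEx implementation.)

# Minimal sketch of a SpotEx-style rule, as I understood it from the talk;
# the rule format and function names are my own invention.

from datetime import datetime


def visible_ssids() -> set:
    """In the real system this would come from a WiFi scan on the phone."""
    return {"MallFoodCourt", "CoffeeCorner"}


def lunch_time(now: datetime) -> bool:
    return 11 <= now.hour < 14


RULES = [
    {"ssid": "MallFoodCourt",
     "condition": lunch_time,
     "content": "Coupon: 2-for-1 lunch special at the food court"},
]


def matching_content(now: datetime) -> list:
    ssids = visible_ssids()
    return [r["content"] for r in RULES
            if r["ssid"] in ssids and r["condition"](now)]


print(matching_content(datetime(2012, 10, 10, 12, 30)))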

 

"Reality mining":  use logged MAC addresses for discovering mass behavior (i.e. track where people go, and when, to figure out what they do and where they are likely to go next).  (Comment – I feel like I just walked through some kind of wormhole to an alternate universe of views on privacy compared to the last session.) 

 

He compared their service to Foursquare and other existing “check in” based services:

Foursquare has similar technology to automatically check users in to locations.  Foursquare is optimized towards its particular business model, which involves checking users into commercial locations and rewarding them.  SpotEx allows anonymous use, which Foursquare does not.

 

Future work:

-          They need a developer’s API

-          They want to apply the proximity information they get to integrate with social networks (Twitter, etc.) and augmented reality. 

-          They want to support WiFi access for data. 

-          They want to replace their external database with HTML5 code using Web intents (details beyond the presentation)

 

Conclusion – You can do context-aware presence and location based on WiFi and Bluetooth visibility.  You can use a smart phone as a proximity sensor.  The software is complex; HTML5 may help here.  This system can have commercial applications.

 

Question:  How does the detection really work?  (I think the questioner didn't really understand how the WiFi beacon that presents the information works.)  Answer – yes, we can detect

 

Question (Chet McQuaide):  What prevents someone from using this to push malicious data to a smart phone?  Rebecca said nothing in principle prevents it, though in their implementation it can't happen because the data comes from a private database.  (Comment – yes, but this is exactly the kind of door that system designers implement without adequately thinking through the potential for malicious use.  The problem is that many mechanisms which offer users something convenient and easy are, if not very carefully designed, open to hacking and abuse.)

Application-Aware Traffic Control for Network-based Data Processing Services (Kouichirou Amemiya, Fujitsu)

 

"Big data" causes congestion issues in networks.  The only solutions are either to overprovision networks to handle peaks, or to find a way to smooth out peak traffic and reduce peak demands.  Their approach is to spread out data transport to avoid high peak loads by varying delay.  One challenge is recognizing the context of data transport needs: which applications need the data in real time and which can wait.

 

The solution uses AATC nodes in a network.  Information is mined from the data to determine what the time constraints are, and the nodes will delay transmission of information that is less critical in favor of that which is more critical.  Control is essentially based on time limits for when data must be delivered.  (Comment – this is essentially what was called "deadline scheduling", an algorithm for processor scheduling, applied to data transmission.  I remember exploring something like this in designing a packet voice system in the mid 1980s, though I don't recall the details.)
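
(Comment – for concreteness, here is a small sketch of deadline-based scheduling in the spirit of what was described: less urgent data is sent later, and each item is checked against its deadline.  The numbers and structure are my own, not Fujitsu's algorithm.)

# Sketch of deadline-based ("send the least urgent data later") scheduling in
# the spirit of the AATC idea; this is my illustration, not Fujitsu's algorithm.

import heapq
from typing import List, Tuple

# Each item: (deadline_in_seconds, name, size_in_megabits)
PENDING: List[Tuple[float, str, float]] = [
    (2.0, "sensor alarm", 1.0),
    (600.0, "nightly log upload", 800.0),
    (30.0, "dashboard refresh", 20.0),
]

LINK_MBPS = 100.0


def schedule(pending, link_mbps):
    """Send in earliest-deadline-first order; report when each item finishes."""
    heapq.heapify(pending)
    clock = 0.0
    order = []
    while pending:
        deadline, name, size_mbit = heapq.heappop(pending)
        clock += size_mbit / link_mbps          # transmission time on the link
        order.append((name, round(clock, 2), clock <= deadline))
    return order                                # (item, finish time, met deadline?)


for item in schedule(list(PENDING), LINK_MBPS):
    print(item)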

 

They did some performance evaluations.  What he showed was the amount of data sent over time, both with and without AATC, in absolute terms and relative to network link capacity.  They achieved a 58% reduction in the peak load.  They also showed a very large reduction in the maximum time required to transmit data, and a higher utilization of the network.  (I believe this was because the full data set required less time to transmit, so the same amount of data was sent over a shorter time interval.)

 

For this to work, there must be off-peak times on the network in order to make sure you can get the non-urgent data through.  It also worked better if each set of data had a different time constraint.

 

Their future work included more evaluation of this with different kinds of data loads.

 

Question (Rebecca):  How do you know what data is urgent?  Answer:  they have an analyzer that does that.

Video Communication for Networked Communities: Challenges and Opportunities (Tim Stevens, British Telecom)

 

There is a cycle going on where technology drives applications, which influence behavior, which pushes apps, which cause more technology, etc.  As part of this trend they are looking at the opportunity to use existing technology to establish large ad-hoc interactions (video conferences) based on what is available, and at what you can do to help achieve true telepresence.

 

They took as a scenario the support of people engaged in a shared activity, where they can come and go as they please.  One scenario was based on actors engaged in a performance.

 

The performance use case set up two rooms each with a huge screen and cameras, and they had groups of actors and groups of dance performers interact through it.  It worked pretty well for the dancers and for people conducting debates.

 

The next phase will use a more complex room with multiple cameras and be focused on political debate.

 

The current setup has 3 rooms, each with a big TV, 3 HD cameras, and a microphone array.  End-to-end delay is about half a second.  One of the things they are doing is coordinating the video with who is speaking, and the time required for speaker detection is an issue.  If they can achieve the goal of cutting to the right camera before the person starts to speak, that significantly improves the perception of telepresence. 

 

One challenge is the amount of variation in the technology that has to be accommodated:  multiple standards, multiple platforms for the "web", etc.  It works in the lab – the big goal is making this work over commercial networks.  Problems include asymmetry (i.e. you can't afford to send 3 HD video streams and let the destination room decide which one is relevant).  Having different equipment in different destinations is a problem as well, and it is an essential consequence of being ad hoc and dynamic. 

 

Much of the focus is on automating the job of a TV production director – this isn’t easy, it’s a job that requires years to learn and to do well and has a lot of ad hoc knowledge, in addition to coping with the technical issues.  They are running 3 trials in sequence and using feedback to refine the offering.  (Comment – I don’t know whether it was the effect of a late night or just the way this was organized, but I felt like I was at the receiving end of a fire hose of “ad hoc” information rather than an organized presentation of a concept)

 

One observation he made was that a shared space available 24 hours a day is very different from a traditional conference – people can come and go.

 

There are problems – musicians can’t make this work very well because even a 50ms delay is enough to create problems.

 

Dancers took advantage of the unique aspects of this, which included the fact that the 2 or 3 rooms give you more space (you can use all your floor without running into people in the other rooms), and you can use the perspective effects of the camera (i.e. you are bigger if you are close to it) in interesting ways.

 

Rebecca – to make this scientific, you have to have a way of measuring user quality of experience to determine if you are doing something useful to people.  They have people doing statistics based on surveys, but it’s not Tim’s area of expertise.

 

Question:  What kinds of synchronization issues are there, given there are different delays into different rooms?  Hans Stokking covered this 2 years ago, noting that different people get information on, say, a shared soccer game at different times.  (Comment – this is interesting.  My recent experience at the Ryder Cup golf tournament, which for the first time allowed mobile devices, was that the lack of synchronization in when information was delivered to spectators caused the crowd reactions to be much more diffused than at other events, where the only place people got information was from scoreboards, so that everyone in an area at least got it at the same time.)

 

Another question was on building automation and whether it could address the need, as we have now, to swipe a card to gain access to some area of the building.  Yes, that would be nice.

 

Question (Bruno Chatras):  Is the conferencing work related to the IETF CLUE effort on multi-media conferencing?  Answer – he doesn't know if anyone in the project is communicating with the IETF team; they are aware of it.  (Comment – in spite of all the technology out there, the Atlantic Ocean remains a real barrier to communication.)

 

Question (Chet McQuaide) – In the mid 1980s he experienced an old meeting service (the old AT&T Picturephone Meeting Service).  That offered the choice of automatic switching of the camera to the speaker or just the background view, and most chose just the background because the switching was disruptive.  Are they having human factors people look at things like this?  The response actually dealt with the technology of video switching: by avoiding transcoding and using more modern processing, he said they can switch quicker and make this less intrusive.  (Comment – I used the same service as Chet, and indeed the automation didn't work well from a human factors perspective.)  He reiterated that they are doing a lot to try to imitate what a skilled human director would do in choosing the right view.

 

Question (Rebecca) – Your paper talks about using emotions in this, what’s that about?  They have people looking at trying to recognize emotional responses in people and use that information in deciding what video to use.

 

Question:  Given the heterogeneous environment, how do you avoid transcoding?  Answer – the response seemed to require more knowledge about video coding than I have, but it sounded like "if the coders are similar enough you can get away with it".

 

Question (on AATC):  Isn't net neutrality a problem for this technology?  Response:  "I was afraid of this question."  I think the response was that they would push the implementation into the application endpoint to allow the network transport to be neutral and not violate this.  I think the reality is that this is beyond the scope of the work.

 

Summary from Rebecca – all of these presentations, while having a lot of variations, used the concept of middleware that operates at the edge of the network.  A second common theme was “big data” and the issues that it potentially raises.

 

Bottom line:  We are trying to humanize our tools.

 

Session 6B – Identity Management (Hui-Lan Lu, Alcatel-Lucent)

She started by showing the old cartoon “On the internet nobody knows you’re a dog”.  That was funny 15 years ago, but as we do more and more on line, identity and privacy have become serious issues.  Who may know that you are a dog?  Who may track you?  How can your personal data be used? What controls do you have? What technology can help?

 

Towards a User-Centric Personal Data Ecosystem (The Role of the Bank of Individuals' Data)  (Corrado Moiso, Telecom Italia)

 

He started with a common expression in Italy for someone who is not bright by analogy to a character who would fall for the offer to keep their money secure by putting it in someone else’s pocket.  We are no better than that in our acceptance of putting our data in the hands of Google, Facebook, Microsoft, and others.

 

He categorized three kinds of  personal data:

-          Volunteered data – created and explicitly shared by users.

-          Observed Data – captured by recording actions of people

-          Inferred data – data about people based on analysis of volunteered or observed data.

 

There is growing interest in personal data for several reasons:

-          Smartphones make it much easier to produce and share

-          Greater attention to privacy (e.g. Do Not Track)

-          Personal data is "the new oil" – the latest resource to be discovered and valued.

 

The current model for personal data is organization-centric – data is stored in IT systems of organizations with limited or no control by the end user whose data it is.  Users don’t know and can’t profit from the value of this data.  There is a lot of concern about protection, but less about “disclosure”, which might be a bigger real concern.

 

They propose a user centric model, where each person has their own personal data store and controls its access.  Users can grant access to organizations they trust to use it, and others will not have access.  (Comment – this works fine for trusted organizations, but nothing in technology can stop an organization that you have authorized to access it from leaking it to others.)

 

This presents a  new business opportunity for companies to provide the personal data store.  He gave a lot of requirements on it.  A PDS is really like a local bank – it holds something of value for a community of users.  It can become the center of an ecosystem built around personal data, much like banks provide services that help people use and grow their financial assets. 

 

He gave an example – when you check into a hotel, you tell your personal data store to share information needed by the hotel about you, and the hotel adds the room bill at the end to your personal data store.

 

The PDS can serve as a moderator in a procurement process (you ask for a quotation anonymously and the PDS shares only whatever general information is needed to define your requirements, then you can pick a supplier based on the response before you identify yourself to your selected provider.)  It can also host a personal application database.  The personal data store also provides a way for a user to be paid for access to their data. 

 

(Comment – when I see things like this I am always reminded of the "outside the system" ways in which many violations of privacy occur today.  Consider this example as reported by a National Public Radio (US) feature on privacy and the internet.  A person was fired from a job because of something she did that was captured in a photo on the internet (the particulars are unimportant, but it was neither illegal nor by most standards immoral).  The photo was captured and posted by someone she didn't know, while someone else had "tagged" her in the photo.  The tag caused that photo to become visible to her management.  The point here is that production and storage of personal data is so complicated, and much of it not under our personal control, that it is difficult to imagine any kind of comprehensive solution.)

 

They did some surveys on interest in various aspects of the PDS.  Interestingly enough, there was no big difference across age groups.  People are interested in some personal services like expense management.  Some are interested in selling their data, but generally only if it is anonymized. 

 

TI will explore this in their Territorial lab, which uses a lot of real users with Android phones as “beta testers” for a PDS service.  These people will be doing things that are part of their normal life and using this service to determine how useful it is. 

 

He showed the architecture for it (nothing radical). 

 

Is the PDS an opportunity for Telcos?  It's outside their traditional scope, but they are in a good position to deliver it.  Telcos have a high degree of user trust.  (Comment – maybe this varies by country.  The fact that most users don't see their Telco as part of their data service applications makes this unclear.  I don't know the level of trust for Telcos worldwide.)

Privacy Preserving Data Miner (Martin Beck, Nokia Siemens)

 

He started by observing that if you intersect lists of voters and medical data within a particular age, sex, and zip code, you can probably uniquely identify up to 80% of people in the US.  The message is that privacy is a serious problem when data mining and powerful data analysis are used, even if the databases don't identify individuals.
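
(Comment – a toy illustration of the linkage risk he described: two "anonymous" datasets joined on age, sex, and zip code can single out individuals.  The data below is invented.)

# Toy illustration of the linkage risk: joining two datasets on age, sex,
# and zip code can re-identify "anonymous" records. Data invented.

voters = [
    {"name": "J. Smith", "age": 54, "sex": "F", "zip": "02139"},
    {"name": "A. Jones", "age": 31, "sex": "M", "zip": "60601"},
]

medical = [   # no names, yet still linkable
    {"diagnosis": "diabetes", "age": 54, "sex": "F", "zip": "02139"},
]

QUASI_IDS = ("age", "sex", "zip")

for record in medical:
    matches = [v for v in voters
               if all(v[k] == record[k] for k in QUASI_IDS)]
    if len(matches) == 1:   # unique match: the "anonymous" record is re-identified
        print(matches[0]["name"], "->", record["diagnosis"])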

 

He overviewed typical database technology and organization, then talked about anonymization.  The idea is that a database of real users is transformed in some way to prevent identifying individuals before handing it over for analysis.  K-anonymity: ensure that any subset of the database that can be selected through analysis has at least K individuals. 

 

One way to do it is to "fuzzify" the data (i.e. give an age or zip code range instead of making it unique).  Another way is to add random noise that should not impact the overall statistics.  Both techniques impact the value of the data for data mining.  There are ways to analyze how much that impact is and to identify "good utility" techniques and "bad utility" sets. 
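
(Comment – a simplified sketch of the "fuzzify" idea: generalize the quasi-identifiers and then check that every combination occurs at least k times.  This is my own illustration, not the presented tool.)

# Sketch of the "fuzzify" idea: generalize quasi-identifiers until every
# combination occurs at least k times. Simplified, not the presented tool.

from collections import Counter


def generalize(record: dict) -> dict:
    """Replace exact values with ranges/prefixes (one fixed level, for brevity)."""
    decade = (record["age"] // 10) * 10
    return {
        "age": f"{decade}-{decade + 9}",
        "zip": record["zip"][:3] + "**",
    }


def is_k_anonymous(records: list, k: int) -> bool:
    groups = Counter((r["age"], r["zip"]) for r in records)
    return all(count >= k for count in groups.values())


people = [{"age": 34, "zip": "02139"}, {"age": 37, "zip": "02141"},
          {"age": 52, "zip": "60601"}, {"age": 58, "zip": "60605"}]

blurred = [generalize(p) for p in people]
print(blurred)
print("2-anonymous?", is_k_anonymous(blurred, 2))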

 

They did some demonstration work based on US Census data.  Their demonstrator selects what kind of anonymizing to apply based on what mining queries are to be applied, and they compared the results of applying those queries to the anonymized data with the results from the original data.

 

The results they got were for something that would predict the income and number of children for a group of people.  In one data set, where the predictor wasn't perfect on the original, the results of applying it to the k-anonymized data were pretty good until k got to maybe 25 or 30.  However, with a query that had a much tighter prediction range, the degradation with increasing privacy levels was more rapid.

 

Their conclusion was that they have demonstrated basic feasibility: they can use anonymized data in queries, and it would be useful for some applications (potentially advertising).  He also mentioned this as a way of overcoming restrictions imposed by governments against data mining (i.e. it might not be great, but it's better than nothing).

 

Question:  What’s the motivation for people that are now sitting on massive databases that they get value from for going to a PDS approach?  Answer:  Maybe legal pressure, but also because the PDS is more comprehensive it allows more information to be aggregated, and the data is likely to be more complete and more accurate than the things collected by companies.

 

Question (Chet McQuaide):  How do you deal with the inherent bias when people volunteer information?  (Satisfied customers don't respond nearly as often as dissatisfied ones.)  He cited a problem in getting fitness data that is biased because the overweight don't respond; as a solution they have algorithms to correct for things like that.

 

Question (Igor Faynberg):  Is there a risk that people will put the wrong data in the PDS?  Corrado's response was a complex example involving a user who requested a shared ride and was insisting on some things about the driver's record, and said that the PDS could verify this data with the government authority controlling driver's licensing.

 

Question (Chet McQuaide):  Did you come up with any rules of thumb for what level of anonymization is adequate to ensure privacy?  Answer – he had some ideas, but basically felt it wasn't up to them to specify that; it should be under the control of users.  (Comment – the trouble of course is how do you have any consensus when the database contains millions of users.)

 

Corrado’s response was that for their scheme they wanted to allow users to control what data would be released to data mining and specifically how it might be anonymized.

 

Question:  Starting a new ecosystem is a chicken-and-egg problem.  How do you get started?  They are approaching it from both sides – are people interested in putting data there?  Are companies interested in accessing it?  (Comment – the trouble is that unless everyone agrees to rules on how to use personal data, the user doesn't get the full benefit.)

 

Question (Me):  Personal data is like CO2 in the atmosphere; it gets out there as the result of lots of individual decisions.  How do you change the culture so that people will use this?  (This stimulated some discussion on incentives, but nothing conclusive.)

 

Question (Stuart) – Who really owns your data?  If you die, your will controls what happens to the money you have in the bank, but what happens to your personal data?  Corrado – He is working with someone in a university who is looking at issues like this.  Personal data is like correspondence: who owns your letters after you are gone?  Roberto Minerva gave an example of the trouble of inheriting data – Bruce Willis (actor) wanted to pass on his iTunes music to his children in his will, but was told that wasn't legally possible.  (Comment – that's because he doesn't own it under the terms of the license; it's not really the same thing.)

 

David – If 30% of the people want to sign up for the PDS, are there any common characteristics?  (Hmm, we are talking about data mining applied to answers to a questionnaire on privacy!)  Answer – he didn't look at that.

 

Session 7: Panel – Is Cloud Security Ready for Mission Critical Applications?  (Andreas Lemke, Alcatel-Lucent)

He has a Ph.D. in CS and worked in research and CTO consulting on architecture, now working extensively in cloud computing.

 

The panelists include:

-          Marc Cheboldaeff, T-Systems.  He started his career building IN services, then did prepaid services and more general on-line charging systems.  Most of his experience is from the vendor side; he recently joined T-Systems, where cloud is a big deal.

-          Thomas Doms, TUV Trust IT.  Trust IT is a security certification agency.  They have developed a strategy for certifying cloud based businesses.  He is in strategy.

-          Daniel Eberhardt, Savvis.  Savvis is an IT Infrastructure outsourcer.  He started work doing billing applications as a systems engineer.  Now he works with companies migrating business services into the cloud.

-          Bernd Jager, Colt Technology Services.  He is part of the strategy/architecture team and an expert in technology.  The company started in telecom, and now offers broader business services.  He chairs the telecom working group on cloud security.  He considers himself an expert in security.

-          Sven Klindworth, BT Advice.  His function is doing IT consulting for BT’s global ICT customers.  He works in managed hosting, now cloud services.

 

One question Andreas had coming in was: "What are the real threats?"  What incidents have demonstrated security problems?  Consumer security incidents are well publicized, but B2B security problems are not.  (Comment – maybe, but I think a lot of the "your SSN, login, or credit card information has been released" incidents are in fact instances of B2B security problems.)

 

He gave the example of Mat Honan from Wired, who was the victim of a severe attack that wiped out his internet accounts and personal laptop.  He had accounts on Gmail and Twitter, and 5 years of pictures and other data online.  He was the subject of a hack attack aimed at gaining access to his Twitter account.  The hacker started by using his billing address and credit card information that was hacked from Amazon (gotten by deceiving a human agent into releasing it) to get into his Apple account and reset his password, which could be done knowing the last 4 digits of the credit card that was paying for the account.  From there he broke into Gmail and reset it (the reset information was sent to the Apple account), then used Gmail to break into the Twitter account.  He then wiped out all the stored data and invoked a synchronization function to remotely erase all the data on his Mac (an Apple "security" feature intended to protect someone whose laptop was stolen), all to prevent the owner from regaining access.  The result was the loss of 5 years of his life, essentially.  (Comment – this is why I personally have not gone to cloud-based storage of personal data, maintain my own backups, and disable any remote administrative access to my equipment.  I also try very hard to avoid storing any information that would threaten my finances or my identity online or on my computer devices, for the same reason!)

 

This is an example of the kind of threat we face.  Privacy concerns are limiting the use of cloud deployments and it’s somewhat dependent on where you are and local concerns and/or awareness of privacy concerns.

 

(Thomas) From an end-user perspective the end-user shouldn’t notice when a cloud implementation is being used.  From a company perspective getting the same security as in a data center deployment should be easy, but it’s not default practice now.  Smaller companies usually have no security practices and no experience in evaluating those of a vendor.  Large companies have an IT security organization and it’s their responsibility to make sure that they select a vendor who matches their security processes.

 

(Sven) When you move to the cloud you have to change your business processes to make sure people from operations adequately communicate needs, and IT can adequately communicate and evaluate the cloud provider.  Some questions require details:  How are backups done?  Are old drives adequately erased?  Is networking between servers secure?  The Cloud vendor should be willing to work to meet your needs and customize their contract for you – if not, you probably have the wrong provider.

 

Andreas – Some take the view that the cloud should be more secure because they pay attention to software updates, backup, firewalls, etc. that the individual business may not be able to.

 

Question (Hui-Lan):  How do you evaluate the cloud provider on security?  Bernd – they have assessments based on non-functional requirements.  Basically, though, it means the cloud provider has to match your needs.  The trouble is that most companies don't know their requirements or can't describe them adequately, particularly in security.  (Sven) – It's worse than that.  The head of IT knows what good security looks like.  With cloud services the business unit often bypasses the IT department ("Let's try salesforce.com").  The result is that the people who are setting up accounts and signing contracts don't know what best practice is (e.g. they all use the same account and password).

 

Andreas – Isn't part of the threat that cloud implementations have more attack points?  Consider the hypervisor which virtualizes the processing.  Unless all these elements unique to the cloud are carefully secured, there are open doors.

 

Thomas – Yes, but remember the first level of security is with the application.  If the application isn't careful about what it lets users do and how it authorizes them, that will be exploited.  Also social engineering – if your people don't know how to be secure, they will be the focus of the attack.

 

Daniel – The hypervisor is indeed a point of vulnerability; if you get control there you can get at anything, but he's not aware of any successful attack that came in this way.  (Comment – one of my rules of thumb about security is that the bad guy never tries to break in through the doors with locks on them – they find openings you didn't even consider, like the ventilation shaft, or simply following an authorized user who holds the front door open for them.)

 

Andreas – Is multi-tenancy a problem, where you might run on the same server as your competitor?  Isn't that scary?

Marc – Multi-tenancy is real, but not usually desired by big enterprises for sensitive applications.

Thomas – They have customers using shared platforms for mission critical applications.  They advise upgrading application security in preparation.

Daniel – The credit card companies know a lot about their risks in real $, and accept multi-tenancy as a risk they understand how to manage.

 

Andreas – Are governments ready for cloud deployment?  Daniel – they use community clouds (shared only with other government functions).  Some insist on private clouds.

Marc – Works with an oil exploration company who insists on a private cloud. [for sensitive data like the ones related to new areas of prospection]

Thomas – it’s not about the vertical function, it’s about the company.  Some are comfortable with shared clouds, some aren’t.

 

Andreas – Is there value in considering data and processing differently?  (He asked about having the company keep the data, and most of the time the applications, but be able to ramp up application processing in the cloud if needed.)

 

Marc – some are applying cloud principles in their own data center and often people start there.  If they use standard tools, it becomes possible to “burst” into the cloud. [aka hybrid cloud: public/provider cloud]

Daniel – would look at this the other way around – you can store data securely in the cloud because you can encrypt it first; processing requires your data to be open to attack.  Customers do this (consider backup).  Even for medical data – if you encrypt it with acceptable technology, it meets government privacy laws.
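
(Comment – a minimal sketch of Daniel's encrypt-before-upload point: the key stays on-premises and only ciphertext ever reaches the provider.  It assumes the Python 'cryptography' package; the upload call is a placeholder.)

# Sketch of encrypting data before it ever reaches the cloud; the key is kept
# in the customer's own key store and only ciphertext is uploaded.

from cryptography.fernet import Fernet

key = Fernet.generate_key()        # stays in the customer's own key store
f = Fernet(key)

record = b"patient: 12345, diagnosis: ..."
ciphertext = f.encrypt(record)     # this is all the cloud provider ever sees

# upload_to_cloud(ciphertext)      # placeholder for the provider's storage API

assert f.decrypt(ciphertext) == record   # only the key holder can read it back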

Thomas – this is one way people meet business continuity requirements (i.e. don’t lose critical data if your data center is wiped out in a disaster).

Sven – In Europe we are still at an early stage – people try little things first with non-critical data, and if they like it, they move up to increasingly critical services.  "The appetite comes with the eating."  He feels that European companies are not yet realizing the economic benefits of the cloud that US companies are.  They don't think of the cloud first as the way to rapidly scale up a trial application.  They first design a custom solution.  When they have to expand internationally, the IT department tells them it will take a year to do it, and only at that point do they go to the cloud.  He feels that the trend to the cloud is unstoppable, and we won't be having this discussion years from now.

 

Question (David Ludlam)  The UK just had the worst IT failure in the banking system that ever occurred – a major British bank couldn’t move money for its customers for 2 weeks.  Would the cloud have either prevented this or made it easier to recover?

 

Thomas – This was a human failure, the cloud couldn’t have helped.  Humans made a configuration error that created the problem.

Marc – He would expect quicker recovery, because migration to a new server (and moving virtual machines from one data center to another) is normal in the cloud.  (Comment – most of the bad telecom failures involved corruption of data, not simply failures of equipment.  Migration of servers doesn't help if the database is what's bad.)

Daniel – moving memory images over networks creates some real security concerns.

 

Andreas – Is secure networking a big plus?

Marc – Yes, if the provider can handle the whole movement of data to/from the cloud it’s a big plus.  (i.e. handle the end-to-end chain involving the datacenter and the network)

Thomas – Yes, but in the real world of telecom networking and hosting don’t cooperate well.

Daniel – In a cloud implementation you are dependent on the network.  In 2006 they thought that DDoS (Distributed Denial of Service) attacks were no longer a concern, but in 2011 they suddenly became hot again, because they are a way someone can attack the business operations of a company or government using networking in a cloud implementation.

 

Question:  Aren't a lot of security issues independent of the cloud (for example, the insecure procedure at Amazon which allowed the blogger's credit card number to be retrieved)?

 

Daniel:  Yes, that’s social engineering and it’s not going to be helped by the cloud.  One issue is that cloud implementations make a lot of use of web services interfaces, which can provide attack points.

Thomas:  They had a client that implemented a cloud solution using APIs with good security, but saw a big increase in the number of attacks against those interfaces once they went to the cloud.

Daniel:  We know how to secure browsers, but we are learning we have to do better on APIs.

Bernd:  Each new device (e.g. mobile internet) opens new places to attack.  He also made the point that a shared data center is an attractive target because it contains data from many users and potentially many applications.

 

Question (Hui Lan)  Are new standards for API traffic related to identity management adequate? 

 

Daniel:  The new protocol under discussion looks really promising.

Thomas:  One thing they have done is implement rate limits on APIs, which limits DDoS attacks but also prevents credential guessing by limiting the speed at which it can happen.
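
(Comment – a simple token-bucket sketch of the kind of API rate limiting Thomas described; the rates and structure are illustrative, not their product's implementation.)

# Simple token-bucket rate limiter: each API call consumes a token; tokens
# refill at a fixed rate, so bursts are bounded and guessing is slowed down.

import time


class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False           # request rejected: slows DDoS and password guessing


bucket = TokenBucket(rate_per_sec=5, burst=10)
print(sum(bucket.allow() for _ in range(100)), "of 100 requests allowed")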

 

Andreas:  What is the impact of government regulations, in particular the US Patriot Act, which violates privacy requirements for many countries?

 

Sven:  Compliance is a much bigger issue than security in blocking migration to the cloud.  If the country requires that data can't leave the country, they can't use a cloud that stores the data elsewhere.  This is what blocks you, not security or APIs.

Daniel:  The Patriot Act is the most famous case and is actually a marketing tool for EU-based cloud providers, but in fact most EU countries have something similar.  He cited an obscure clause of the Digital Millennium Copyright Act which states that your data can't be used without your permission (or something like that, I didn't catch all the details).

Marc:  They have to develop matrices of requirements like this, covering where providers put the data and what the requirements for the customer are, to try to match them.

Question (Igor Faynberg): For compliance, does "data" always mean the same thing, or do the requirements for privacy distinguish between encrypted and plain data?  (i.e. could you store encrypted data in the US and still comply with EU laws on privacy?)

 

(Several panelists said they didn't know the details; one said something about adequate cryptography relaxing some requirements, but wasn't sure about that.)

 

Max Michel – this is a place where government is way behind the reality of the internet.  The real issue shouldn't be where the physical disk is, but who can get access to it, and that happens over networks, not through physical proximity.

Closing Session

Stuart chaired the closing session for the conference.

Igor Faynberg – TPC chair for 2012

Igor has been with ICIN for 20 years – back then it was about sending information to telephone switches that would let them provide new services.  Users now do much more than set up calls, which creates many new issues.  Cloud computing is an example.  He overviewed the content in terms of how the sessions form edges of the whole picture – M2M, clouds, identity management, public safety.  He commented very briefly on the smaller number of attendees this year, but highlighted that the quality of papers and presentations has probably never been higher.  (Comment – I agree with this.  I believe that in part the smaller numbers were due to changes in submission requirements and discounted conference fees, but that the impact was primarily to discourage the submission of "weak" papers and attendance by those who did not feel strongly connected to the conference.  The papers and attendees we got represented those most active in the field with the most to contribute.)  He thanked Bruno Chatras, with whom Igor worked 20 years ago, for all he did.  (Bruno has been the secretary for the TPC for most of this time and will be the 2013 ICIN chair.)

 

Stuart Sharrock (ICIN Events)

Stuart acknowledged the invaluable support of the student team who support the conference, and of Mireille Edin, who has managed logistics for the conference since I became involved in the late 1990s. 

Stuart announced the conference awards for presentations and papers:

-          Best Paper overall:  Haruno Kataoka (Sightfinder).

-          First runner up:  Corrado Moiso (personal data store paper.)

-          Second runner up: Kouichirou Amemiya (Application-Aware Traffic Control for Network-based Data Processing Services).

-          Best Presentation day 1:  Antonio Manzalini – towards Halos Networks

-          Best Presentation day 2:  David Ludlam.

-          Best Demonstration:  Michael Scharf, Alcatel-Lucent.

 

He also announced this year's recipients of the "honorary member" awards, given to those who have given long service to ICIN:

-          Roberto Minerva.  Roberto was TPC chair for 2011, and has been a frequent author and session chair and a member of the TPC for many years.  “Roberto is the telecom community’s Google”

-          Bruno Chatras.  Bruno has been secretary of the TPC for many years, and will be TPC chair for 2013.

 

With that the conference was closed.  In some years we have been able to announce the details of the next conference, but this year the TPC and other organizers are still considering ideas for improving the conference and increasing attendance which might impact timing or location, as well as putting together the themes for 2013.  We expect to be able to announce the details within a few weeks.