Workshops (new!)

4th SCLP'2011 workshop

Related Events



Tutorial 1 (February 07, 2010): Open smart cards for networks and cloud identity services
Duration: Half Day (from 14:00 to 18:00)
Author: Professor Pascal Urien, Telecom ParisTech

Author Short BIO : Pascal Urien (www.enst.fr/~urien) is a full professor at Telecom ParisTech. He graduated from Ecole Centrale de Lyon, holds a PhD in computer science, and founded the EtherTrust company (www.ethertrust.com) in 2007. His main research interests are security and smart cards, especially for networks and distributed computing architectures. He holds fifteen patents and has about one hundred publications in these domains. Pascal contributes to several industrial committees, including the IETF, and has participated in various French and European research projects. He is the father of the Internet smart card technology, which won two industrial awards: Best Technological Innovation at Cartes'2000 (Paris) and Most Innovative Product of the Year at the Advanced Card Award 2001 (London). He invented the EAP smart card, which also won two industrial awards: Best Technological Innovation at Cartes'2003 (Paris) and the Breakthrough Innovation Award at CardTech/SecureTech 2004 (Washington DC). In 2006 he won a bronze award at the SecureTheWeb Developer Contest organized by Gemalto and Microsoft. Pascal was one of the winners of the 9th and 11th French national contests (2007 & 2009) for the support of innovative start-ups.

Tutorial Abstract

This tutorial presents open technologies that enforce trust for access control in networks, services and applications. Internet technologies and WLAN deployments create ubiquitous services, providing file downloading and streaming facilities in a seamless way. Although efficient cryptographic algorithms (symmetric or asymmetric) make it possible to design strong access protocols, trust is a key factor in avoiding identity hijacking in the emerging "always on" society. For several years we have worked on open architectures in which tamper-resistant devices enforce trust and access control for both WLANs and Web applications. In 2008 more than 3 billion smart cards were produced; about one billion of these devices include a Java virtual machine and execute Java programs in trusted computing platforms. We introduce EAP smart cards embedding authentication methods (EAP-TLS, EAP-AKA, ...) whose interface is currently defined by an IETF draft, and which can be plugged into laptops, mobile phones or RADIUS servers. We show how such architectures may help to solve identity protection and hijacking issues. We underline the benefits of EAP smart cards for the processing of key trees used by wireless and multimedia security services. Finally, we present strong access control mechanisms dedicated to Web sites, based on embedded SSL stacks, enforcing trust in OpenID infrastructures and cloud computing facilities.
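
As background for the EAP methods mentioned above, the sketch below builds and parses the EAP packet header defined in RFC 3748 (Code, Identifier, Length, Type), which is the envelope carried between a supplicant, an authenticator and a RADIUS server regardless of the method (Type 13 is EAP-TLS, Type 23 is EAP-AKA). This is a minimal illustration of the packet format only, not the tutorial's smart-card interface; the function names are ours.

```python
import struct

# EAP packet per RFC 3748: Code (1 byte), Identifier (1 byte),
# Length (2 bytes, big-endian, covering the whole packet), then a
# Type byte for Request/Response packets only.
EAP_CODES = {1: "Request", 2: "Response", 3: "Success", 4: "Failure"}
EAP_TYPES = {13: "EAP-TLS", 23: "EAP-AKA"}  # method Type codes

def build_eap(code, identifier, eap_type=None, data=b""):
    """Serialize an EAP packet; the Type byte is present only for
    Request/Response packets."""
    body = (bytes([eap_type]) if eap_type is not None else b"") + data
    length = 4 + len(body)
    return struct.pack("!BBH", code, identifier, length) + body

def parse_eap(packet):
    """Decode the fixed header and, for Request/Response, the Type."""
    code, identifier, length = struct.unpack("!BBH", packet[:4])
    if length != len(packet):
        raise ValueError("EAP Length field does not match packet size")
    info = {"code": EAP_CODES.get(code, code), "id": identifier, "length": length}
    if code in (1, 2) and length > 4:
        info["type"] = EAP_TYPES.get(packet[4], packet[4])
        info["data"] = packet[5:]
    return info

# An EAP-TLS Request, e.g. as relayed toward an EAP smart card:
pkt = build_eap(code=1, identifier=42, eap_type=13, data=b"\x20")
print(parse_eap(pkt))
```

In the architectures described above, packets of exactly this shape are forwarded to the card, which runs the method (e.g. the TLS handshake) inside the tamper-resistant device.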

Previous Delivery: The first version of this tutorial was presented at ADVCOMP 2009, October 11-16, 2009, Malta. It has also been accepted at VTC Fall 2010, Ottawa.

Tutorial 2 (February 07, 2010): Joint Protocol-Channel decoding: Taking the best from noisy packets
Duration: Half Day (from 8:30 to 12:00)
Authors:
Pierre Duhamel (1) and Michel Kieffer (1, 2)
1: L2S - CNRS - SUPELEC - Univ Paris-Sud, 3 rue Joliot-Curie, F-91192 Gif-sur-Yvette
2: on leave at LTCI - CNRS - Télécom ParisTech, 46 rue Barrault, F-75634 Paris Cedex 13
{pierre.duhamel, kieffer}@lss.supelec.fr

Author Short BIO :

Pierre Duhamel (Fellow IEEE, 1998; Fellow EURASIP, 2008) received the Eng. degree in electrical engineering in 1975, the Dr. Eng. degree in 1978, and the Doctorat ès Sciences degree in 1986. From 1975 to 1980 he was with Thomson-CSF (now Thales), and in 1980 he joined the National Research Center in Telecommunications (CNET). From 1993 to September 2000 he was professor and department head at ENST, Paris (National School of Engineering in Telecommunications). He is now a full-time researcher with L2S - CNRS/SUPELEC/Univ Paris-Sud, working on signal processing for telecommunications.

Dr. Duhamel has held several positions as member and chair of various IEEE technical committees. He was an associate editor of the IEEE Transactions on Signal Processing and of the IEEE Signal Processing Letters, an IEEE Distinguished Lecturer in 1999, and co-technical chair of ICASSP 2006. He received the Best Paper Award from the IEEE Transactions on Signal Processing in 1998 and was awarded the Grand Prix France Telecom by the French Academy of Sciences in 2000. Dr. Duhamel has published more than 80 papers in international journals and more than 260 papers in international conferences, and holds 28 patents. He is a co-author of the book "Joint Source and Channel Decoding: A Cross-Layer Perspective with Applications in Video Broadcasting" (Academic Press, 2009).

Michel Kieffer (Senior Member, IEEE, 2007) received the Agrégation in applied physics from the Ecole Normale Supérieure de Cachan, France, in 1995. He obtained a PhD in control and signal processing in 1999 and the Habilitation à Diriger des Recherches degree in 2005, both from the Paris-Sud University, Orsay, France. Michel Kieffer is an associate professor in signal processing for communications at the Paris-Sud University and a researcher at the Laboratoire des Signaux et Systèmes, Gif-sur-Yvette, France. Since September 2009 he has been on leave at the Laboratoire de Traitement et Communication de l'Information, CNRS-Télécom ParisTech.

His research interests are in joint source-channel coding and decoding techniques for the reliable transmission of multimedia contents. Michel Kieffer is a co-author of more than 100 contributions in journals, conference proceedings and books. He is one of the co-authors of the book Applied Interval Analysis (Springer-Verlag, 2001) and of the book Joint Source-Channel Decoding: A Cross-Layer Perspective with Applications in Video Broadcasting (Academic Press). Since 2008, he has served as an associate editor of Signal Processing.

Tutorial Abstract

The widely used OSI layered model partitions networking tasks into distinct layers. This facilitates network design, since each layer need not be aware of the information introduced by other layers, allowing heterogeneous contents to be delivered over the same communication network. Moreover, each layer, assuming that the lower layers behave perfectly, attempts to provide perfect information to the upper layers. For that purpose, error-detecting codes (CRCs or checksums) have been introduced at various places in standard protocol stacks, combined with retransmission mechanisms for data packets deemed corrupted. Moreover, since the layers work independently but sometimes require identical (or correlated) information, some redundancy may be found, essentially in the block headers that are processed at each layer. This redundancy has been recognized and used, for example, in ROHC for reducing header lengths. However, the information contained in these headers at one layer can be very useful for performing tasks located at other layers. The usual way of taking advantage of this redundancy is to build joint approaches that improve performance and the use of resources. In joint approaches, the network layers are obviously less compartmentalized: information previously available at a single layer may now be seen and used by other layers, since it can be of great help. The risk of such an approach would be a loss of the architectural coherence that was the primary driving force behind the use of decoupled layers. The role of Joint Protocol and Channel Decoding (JPCD) is to make an efficient (and joint) use of the redundancy present in the protocol layers, as well as the redundancy introduced by the channel coding, in order to obtain optimal performance. A partial effort in this direction was already made within the framework of cross-layer techniques, but JPCD intends to make full use of all properties of the transmitted signal. This approach is especially useful in the context of mixed wired and wireless data transmission.

The aim of JPCD techniques is to improve the efficiency of the various layers of the protocol stack, based on the information provided by the channel combined with the redundancy present in the protocol stack. With this strategy, more reliable header recovery can be performed, and aggregated packets can be delineated more efficiently. More recently, it has also been shown that channel decoding may benefit from the redundancy present in the protocol stack (protocol-assisted channel decoding). As a result, one can obtain at each layer packets that are "more usable", i.e., contain fewer errors. However, in a number of circumstances, this has to be achieved while maintaining compatibility with the classical remedy to transmission errors: retransmission mechanisms. Thus, JPCD allows getting the best performance out of the received signals without changing the way the signal is transmitted. Clearly, since JPCD is performed within the receiver, the ability to use JPCD tools with existing standards makes it potentially very practical. Therefore, there is a good chance that JPCD could come into practical use in the near future, provided that all of the different network layers can be incorporated in the process and that compatibility with existing mechanisms can be maintained. This tutorial provides the corresponding tools.

This tutorial, built on top of the book Joint Source-Channel Decoding: A Cross-Layer Perspective with Applications in Video Broadcasting over Mobile and Wireless Networks (Academic Press, 2009), provides a cross-layer perspective on JPCD tools. It identifies the various sources of redundancy present in a protocol stack. It then introduces tools, mainly taken from channel decoding, which enable classical operations in a protocol stack, such as header recovery, de-encapsulation and packet delineation, to be performed more reliably in the presence of noise. Applications concerning the reliable transmission of multimedia contents (video and HTML files) are then presented. All these JPCD techniques increase the throughput of wireless communication channels, since fewer packets have to be retransmitted. Moreover, a truly permeable protocol stack may be obtained at the decoder side, allowing joint source-channel decoding techniques to process noisy data at the application layer. Finally, this opens the floor for a true optimization of the redundancy allocation at the various layers of the protocol stack in order to obtain the best performance.
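
The core idea of header recovery with protocol redundancy can be illustrated with a toy sketch (not the tutorial authors' algorithm): instead of dropping a packet whose header fails its checksum, the receiver uses the protocol state machine to enumerate the few header values that are actually possible, and picks the candidate closest to what was received. The byte values and the helper names below are our own illustration.

```python
# Toy JPCD-style hard-decision header recovery: maximum-likelihood
# choice over a protocol-constrained candidate set, assuming a
# uniform prior and a binary symmetric channel.

def hamming(a: bytes, b: bytes) -> int:
    """Number of differing bits between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def recover_header(received: bytes, candidates: list) -> bytes:
    """Return the valid header closest (in Hamming distance) to the
    noisy received header, rather than discarding the packet."""
    return min(candidates, key=lambda c: hamming(received, c))

# Suppose the protocol state machine says the next header must carry
# sequence number 7 or 8 after a fixed 3-byte prefix:
candidates = [b"\xaa\xbb\xcc\x07", b"\xaa\xbb\xcc\x08"]
noisy = b"\xaa\xbf\xcc\x07"  # one bit flipped in the second byte
print(recover_header(noisy, candidates))  # closest valid header
```

Real JPCD receivers extend this idea with soft channel values (bit likelihoods) instead of hard decisions, and with candidate sets derived from inter-layer redundancy, which is what makes the approach compatible with existing standards at the transmitter.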


Keynote (February 8, 2010): Cyber-physical MPSoC Systems -
Adaptive Multi-Core Architectures for Future Mobility & Nano Era Views

Prof. Dr.-Ing. Jürgen Becker
Karlsruhe Institute of Technology
KIT Dept. Electrical Engineering & Information Technology
Institute for Information Processing (ITIV)

Author Short BIO : Jürgen Becker is Full Professor for Embedded Electronic Systems in the department of Electrical Engineering and Information Technology at Universität Karlsruhe (TH). His current research is focused on industrially driven System-on-Chip (SoC) integration with emphasis on adaptivity, e.g. dynamically reconfigurable hardware architecture development and application in automotive and communication systems. Prof. Becker is Head of the Institute for Information Processing (ITIV) and Department Director of Electronic Systems and Microsystems (ESM) at the Computer Science Research Center (FZI). From 2001 to 2005 he was Co-Director of the International Department at Universität Karlsruhe (TH). He is author and co-author of more than 250 scientific papers, and active as general and technical program chairman of national and international conferences and workshops. He is an executive board member of the German IEEE Section, a board member of the GI/ITG Technical Committee on Architectures for VLSI Circuits, an associate editor of several international journals, and a Senior Member of the IEEE. Since October 2005 Prof. Becker has been Vice-President ("Prorektor") for Studies and Teaching at Universität Karlsruhe (TH). Since October 2010 he has been Chief Higher Education Officer (CHEO) of the new Karlsruhe Institute of Technology (KIT), the unique merger of a large national research lab of the Helmholtz Society and a prominent state university of Baden-Wuerttemberg in Germany.


The field of embedded electronic systems, nowadays also called cyber-physical systems, is still emerging. A cyber-physical system (CPS) is a system featuring a tight combination of, and coordination between, the system's computational and physical elements. Today, a precursor generation of cyber-physical systems can be found in areas as diverse as aerospace, automotive, chemical processes, civil infrastructure, energy, healthcare, manufacturing, transportation, entertainment, and consumer appliances. This generation is often referred to as embedded systems. In embedded systems the emphasis tends to be more on the computational elements, and less on an intense link between the computational and physical elements.

Multipurpose adaptivity and reliability features are playing an increasingly central role, especially as silicon technologies scale down along the path described by Moore's Law. Leading processor and mainframe companies are gaining more awareness of reconfigurable computing technologies due to increasing energy and cost constraints. My view is of an "all-win symbiosis" of future silicon-based processor technologies and reconfigurable circuits/architectures. Dynamic and partial reconfiguration has progressed from academic labs to industry research and development groups, providing high adaptivity for a range of applications and situations. Reliability, failure redundancy and run-time adaptivity using real-time hardware reconfiguration are important aspects for current and future embedded systems, e.g. for smart mobility in automotive, avionics, railway, etc. Thus, scalability as we have experienced it for the last 35 years is at its end, as we enter new phases of technology and certification within safety-critical application domains. Beyond the capabilities of traditional reconfigurable fabrics (like FPGAs), the so-called Nano Era, with corresponding circuits/architectures, allows for micro-mechanical switches that enable new memory and reconfiguration technologies with the advantage of online chip adaptivity and non-volatility. Transient faults may lead to unreliable information processing, since nano-sized devices represent information with much smaller physical margins. Power consumption and related problems present a challenge where information is processed within a smaller area/volume budget. This includes the consideration of appropriate fault-tolerance techniques and, especially, the discussion of the efficient online self-repair mechanisms necessary to drive such future silicon- and non-silicon-based technologies and architectures.

This keynote will finally discuss in detail the corresponding challenges and specifically outline the promising perspectives for future multi-core as well as dynamically reconfigurable, complex, adaptive and reliable systems-on-chip, for embedded and also general purpose computing systems.

Keynote: Self-Aware Pervasive Systems
Prof. Erol Gelenbe
Professor in the Dennis Gabor Chair, Imperial College
Dept of Electrical and Electronics Eng'g
Head of Intelligent Systems and Networks Group
Imperial College, London SW7 2BT, UK   +44 207 594 6342

Author Short BIO :
Erol Gelenbe holds the Dennis Gabor Chair at Imperial College London. A graduate of METU Ankara, he received the DSc degree from Univ. Pierre et Marie Curie (Paris VI), and the MS and PhD degrees from the NYU Polytechnic Institute. Known for his recent work in autonomic networks and for his pioneering contributions to computer system and network performance, Erol's research is currently funded by the UK EPSRC and EU FP7. According to the PoP citation database, his citation count stands at h=44 and g=72. He also works with industry and is a Fellow of the IEEE, ACM and IEE. He won the UK's Sir Oliver Lodge Medal for Achievement in Information Technology in 2010, and the ACM SIGMETRICS Award in 2008 for his work on computer and network performance modelling and analysis. In 2010 he received the New York University Polytechnic Institute Distinguished Alumnus Award and was elected an Honorary Member of the Hungarian Academy of Sciences. He is a Member of the French National Academy of Engineering (Académie des Technologies), the Turkish Academy of Sciences, and Academia Europaea. He received the Italian honours of Commendatore al Merito (2005) and Grande Ufficiale dell'Ordine della Stella della Solidarietà (2007) for developing certain fields of research in Italy, and France awarded him the honour of Officer of Merit (2001). In 1996 he was the first computer scientist to receive the Grand Prix France Telecom of the French Academy of Sciences. He has received honoris causa doctorates from the University of Liège (Belgium), the University of Roma II (Italy), and Bogazici University (Istanbul, Turkey).

Abstract :
Self-aware pervasive systems (SAPS) combine cyber-technical systems, sensors, decision elements, people, and possibly mobile resources and assets, together with networks, to conduct real-time monitoring and action. An example is an emergency evacuation or management system for a building or some other geographic area. Such systems observe their internal behaviour as well as the external systems they interact with, in order to modify their own behaviour so as to adaptively achieve objectives, such as discovering services for their users, improving their own efficiency or Quality of Service (QoS), reducing their energy consumption, compensating for components that fail or malfunction, detecting and reacting to intrusions, and defending themselves against external attacks. A SAPS creates a distributed internal representation of its past and present experience, based on sensing and measurement, with proactive sensing as one of the concurrent activities it undertakes, including performance and condition monitoring. Coupling of the measurement-sensory system, the internal representation, decision, and "motor" control, all distributed across a network, is the key feature of such a system.


ICC Event