Sartaj Sahni

Professor, Department of Computer and Information Sciences and Engineering, University of Florida, USA

Title : Data structures and algorithms for packet forwarding and classification

Packet forwarding and classification at Internet speed is a challenging task. We review the data structures that have been proposed for the forwarding and classification of Internet packets. Data structures for both one-dimensional and multidimensional classification, as well as for static and dynamic rule tables, are reviewed. Sample structures include multibit one- and two-dimensional tries and hybrid shape-shifting tries. Hardware-assisted solutions such as Ternary Content Addressable Memories (TCAMs) are also reviewed.
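As a minimal illustration of the baseline these structures improve upon, the sketch below implements longest-prefix matching with a unibit binary trie; multibit tries speed this up by inspecting several address bits per node. The prefixes and next hops are invented examples, not from the talk.

```python
# Unibit binary trie for longest-prefix matching (illustrative sketch;
# prefixes and next hops below are invented examples).

class TrieNode:
    def __init__(self):
        self.child = {}       # '0' / '1' -> TrieNode
        self.next_hop = None  # set if a routing prefix ends at this node

class UnibitTrie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, prefix, next_hop):
        """Insert a prefix given as a bit string, e.g. "0101"."""
        node = self.root
        for bit in prefix:
            node = node.child.setdefault(bit, TrieNode())
        node.next_hop = next_hop

    def lookup(self, addr_bits):
        """Return the next hop for the longest matching prefix, or None."""
        node, best = self.root, None
        for bit in addr_bits:
            if node.next_hop is not None:
                best = node.next_hop   # remember the longest match so far
            node = node.child.get(bit)
            if node is None:
                return best
        # address consumed entirely inside the trie
        return node.next_hop if node.next_hop is not None else best

t = UnibitTrie()
t.insert("0", "A")
t.insert("0101", "B")
print(t.lookup("01011110"))  # B: 0101 is the longest matching prefix
print(t.lookup("0011"))      # A: only the prefix 0 matches
```

Each lookup walks at most one node per address bit, remembering the deepest node at which a stored prefix ended.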

Sartaj Sahni is a Distinguished Professor and Chair of Computer and Information Sciences and Engineering at the University of Florida. He is also a member of the European Academy of Sciences, a Fellow of the IEEE, ACM, and AAAS, a Fellow of the Minnesota Supercomputer Institute, and a Distinguished Alumnus of the Indian Institute of Technology, Kanpur. In 1997, he was awarded the IEEE Computer Society Taylor L. Booth Education Award "for contributions to Computer Science and Engineering education in the areas of data structures, algorithms, and parallel algorithms", and in 2003, he was awarded the IEEE Computer Society W. Wallace McDowell Award "for contributions to the theory of NP-hard and NP-complete problems". Dr. Sahni was also awarded the 2003 ACM Karl Karlstrom Outstanding Educator Award for "outstanding contributions to computing education through inspired teaching, development of courses and curricula for distance education, contributions to professional societies, and authoring significant textbooks in several areas including discrete mathematics, data structures, algorithms, and parallel and distributed computing." Dr. Sahni received his B.Tech. (Electrical Engineering) degree from the Indian Institute of Technology, Kanpur, and his M.S. and Ph.D. degrees in Computer Science from Cornell University. He has published over two hundred and eighty research papers and written 15 texts. His research publications are on the design and analysis of efficient algorithms, parallel computing, interconnection networks, design automation, and medical algorithms.

Dr. Sahni is a co-editor-in-chief of the Journal of Parallel and Distributed Computing, a managing editor of the International Journal of Foundations of Computer Science, and a member of the editorial boards of Computer Systems: Science and Engineering, the International Journal of High Performance Computing and Networking, the International Journal of Distributed Sensor Networks, and Parallel Processing Letters. He has served as program committee chair and general chair, and has been a keynote speaker, at many conferences. Dr. Sahni has served on several NSF and NIH panels and has been involved as an external evaluator of several Computer Science and Engineering departments.


H. J. Siegel

Professor, Department of Electrical and Computer Engineering
Colorado State University, USA

Title : Stochastically Robust Resource Management in Heterogeneous Parallel Computing Systems

What does it mean for a computer system to be "robust"? How can robustness be described? How does one determine if a claim of robustness is true? How can one decide which of two systems is more robust? Parallel computing, communication, and information systems are typically heterogeneous mixtures of machines and networks. They are used to execute collections of tasks with diverse computational requirements. A critical research problem is how to allocate resources to tasks to optimize some performance objective. However, systems frequently have degraded performance due to uncertainties, such as unexpected machine failures, changes in system workload, or inaccurate estimates of system parameters. It is important for system performance to be robust against these uncertainties. To accomplish this, we present a stochastic model for deriving the robustness of a resource allocation. This model assumes that stochastic (experiential) information is available about these parameters whose actual values are uncertain. The robustness of a resource allocation is quantified as the probability that a user-specified level of system performance can be met. We show how to use this stochastic model to evaluate the robustness of resource assignments and to design resource management heuristics that produce robust allocations. The stochastic robustness analysis approach can be applied to a variety of computing and communication system environments, including parallel, distributed, cluster, grid, Internet, cloud, embedded, multicore, content distribution networks, wireless networks, and sensor networks. Furthermore, the robustness model is generally applicable to design problems throughout various scientific and engineering fields.
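A hedged sketch of the quantitative idea: if the uncertain parameters are task execution times with known distributions, the robustness of a resource allocation can be estimated by Monte Carlo sampling as the probability that the makespan meets a user-specified bound. The allocation, distributions, and bound below are invented for illustration and are not from the talk.

```python
# Monte Carlo estimate of stochastic robustness: the probability that
# the makespan of a given resource allocation stays within a bound.
# All numbers below are invented for illustration.
import random

random.seed(0)

# allocation: machine -> list of (mean, stddev) task execution times
allocation = {
    "m1": [(4.0, 0.5), (3.0, 0.4)],
    "m2": [(6.0, 1.0)],
}
MAKESPAN_BOUND = 8.5  # user-specified performance requirement

def stochastic_robustness(allocation, bound, trials=100_000):
    """Estimate P(makespan <= bound) by sampling task execution times."""
    ok = 0
    for _ in range(trials):
        # makespan = finishing time of the most loaded machine
        makespan = max(
            sum(max(0.0, random.gauss(mu, sd)) for mu, sd in tasks)
            for tasks in allocation.values()
        )
        if makespan <= bound:
            ok += 1
    return ok / trials

print(f"robustness ~= {stochastic_robustness(allocation, MAKESPAN_BOUND):.3f}")
```

A resource management heuristic can then compare candidate allocations by this probability rather than by expected makespan alone.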

H. J. Siegel is the George T. Abell Endowed Chair Distinguished Professor of Electrical and Computer Engineering at Colorado State University (CSU), where he is also a Professor of Computer Science. He is Director of the CSU Information Science and Technology Center (ISTeC), a university-wide organization for enhancing CSU's activities pertaining to the design and innovative application of computer, communication, and information systems. From 1976 to 2001, he was a Professor at Purdue University. He received two B.S. degrees from the Massachusetts Institute of Technology (MIT), and the M.A., M.S.E., and Ph.D. degrees from Princeton University. He is a Fellow of the IEEE and a Fellow of the ACM. Prof. Siegel has co-authored over 360 published technical papers in the areas of parallel and distributed computing. He was a Co-Editor-in-Chief of the Journal of Parallel and Distributed Computing, and was on the Editorial Boards of the IEEE Transactions on Parallel and Distributed Systems and the IEEE Transactions on Computers.

Rajkumar Buyya

The University of Melbourne and Manjrasoft, Melbourne, Australia

Title : Market-Oriented Cloud Computing: Vision, Hype, and Reality of Delivering Computing as the 5th Utility

Computing is being transformed to a model consisting of services that are commoditised and delivered in a manner similar to utilities such as water, electricity, gas, and telephony. In such a model, users access services based on their requirements, without regard to where the services are hosted. Several computing paradigms have promised to deliver this utility computing vision, including Grid computing, P2P computing, and, more recently, Cloud computing. The latter term denotes the infrastructure as a “Cloud” from which businesses and users are able to access applications from anywhere in the world on demand. Cloud computing delivers infrastructure, platform, and software (applications) as subscription-based services in a pay-as-you-go model. In industry these services are referred to as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS), respectively. To realize Cloud computing, vendors such as Amazon, HP, IBM, and Sun are starting to create and deploy Clouds in various locations around the world. In addition, companies with global operations require faster response times, and thus save time by distributing workload requests to multiple Clouds in various locations at the same time. This creates the need for establishing a computing atmosphere for dynamically interconnecting and provisioning Clouds from multiple domains within and across enterprises. There are many challenges involved in creating such Clouds and Cloud interconnections.

This keynote talk (1) presents the 21st century vision of computing and identifies various IT paradigms promising to deliver the vision of computing utilities; (2) defines the architecture for creating market-oriented Clouds and a computing atmosphere by leveraging technologies such as VMs; (3) provides thoughts on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain SLA-oriented resource allocation; (4) presents the work carried out as part of our recent Cloud Computing initiative, called Megha: (i) Aneka, a software system for providing PaaS within private or public Clouds and supporting market-oriented resource management; (ii) internetworking of Clouds for dynamic creation of federated computing environments for scaling of elastic applications; (iii) creation of third-party Cloud brokering services for content delivery network and e-Science applications and their deployment on the capabilities of IaaS providers such as Amazon and Nirvanix, along with Grid mashups; and (iv) CloudSim, which supports modelling and simulation of Clouds for performance studies; and (5) concludes with the need for convergence of competing IT paradigms for delivering our 21st century vision, along with pathways for future research.

Dr. Rajkumar Buyya is an Associate Professor of Computer Science and Software Engineering and Director of the Grid Computing and Distributed Systems (GRIDS) Laboratory at the University of Melbourne, Australia. He is also serving as the founding CEO of Manjrasoft Pty Ltd., a spin-off company of the University, commercialising innovations originating from the GRIDS Lab. He has authored over 280 publications and three books. The books on emerging topics that Dr. Buyya has edited include High Performance Cluster Computing (Prentice Hall, USA, 1999), Content Delivery Networks (Springer, 2008), and Market-Oriented Grid and Utility Computing (Wiley, 2009). Dr. Buyya has contributed to the creation of high-performance computing and communication system software for Indian PARAM supercomputers. He has pioneered the economic paradigm for service-oriented distributed computing and developed key Grid and Cloud Computing technologies, such as Gridbus and Aneka, that power emerging e-Science and e-Business applications. In this area, he has published hundreds of high-quality, high-impact research papers that are well referenced. Based on an analysis of ISI citations, the Journal of Information and Software Technology, in its January 2007 issue, ranked Dr. Buyya's work (published in the Software: Practice and Experience journal in 2002) among the "Top 20 cited Software Engineering Articles in 1986-2005". He received the Chris Wallace Award for Outstanding Research Contribution 2008 from the Computing Research and Education Association of Australasia, and is the recipient of the 2009 IEEE Medal for Excellence in Scalable Computing.


Jenq-Neng Hwang
Professor

University of Washington

Title : MediaNets: Application Driven Next Generation IP Networks

Thanks to the explosive creation of multimedia content, the pervasive adoption of multimedia coding standards, and the ubiquitous access to multimedia services, the existing IP network infrastructure, with its best-effort packet delivery mechanism, has started to suffer from performance degradation on emerging multimedia networking applications. This inadequacy is further deepened by the prevalence of last/first-mile wireless networking, such as Wi-Fi (IEEE 802.11 a/b/g/n), mobile WiMAX (IEEE 802.16e), and the many wireless sensor and ad-hoc networks. To overcome these quality of service (QoS) challenges, the next generation of IP networks has to be architected in a top-down manner, i.e., with application-driven layered protocol design. More specifically, compression schemes are applied based on the application's media data, and the underlying Network, MAC, and PHY layer protocols are enhanced accordingly to reach optimal performance. This is the fundamental concept behind the design of MediaNets. In this talk, I will address the QoS challenges specifically encountered in video over heterogeneous wireless broadband networks, and propose several application-driven MediaNet solutions based on effective integration of the APP and MAC/PHY layers: cross-layer congestion control that achieves airtime fairness for video streaming so as to maximize the link-adaptation performance of Wi-Fi, effective wireless content broadcasting over large-scale distributed surveillance camera networks or vehicular ad-hoc networks, and opportunistic multicast of scalable live video streaming over WiMAX.
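To make the airtime-fairness point concrete, the toy arithmetic below (invented numbers, not from the talk) contrasts the classic 802.11 throughput-fair outcome, where one slow station drags down every station's throughput, with an airtime-fair allocation, where each station's throughput is proportional to its PHY rate.

```python
# Toy comparison of throughput fairness vs. airtime fairness on a shared
# Wi-Fi channel. PHY rates in Mbit/s are invented examples.
rates = [54.0, 54.0, 6.0]  # two fast stations, one slow one

# Throughput fairness (the classic 802.11 "performance anomaly"): every
# station gets the same throughput, so the slow station consumes most of
# the airtime. Per-station throughput is the harmonic share 1 / sum(1/r_i).
per_station_tput = 1.0 / sum(1.0 / r for r in rates)
aggregate_tput_fair = len(rates) * per_station_tput

# Airtime fairness: every station gets an equal share of channel time, so
# station i achieves r_i / n and the aggregate is the arithmetic mean.
aggregate_airtime_fair = sum(rates) / len(rates)

print(f"throughput-fair aggregate: {aggregate_tput_fair:.1f} Mbit/s")
print(f"airtime-fair aggregate:    {aggregate_airtime_fair:.1f} Mbit/s")
```

Under these assumed rates, airtime fairness more than doubles the aggregate throughput, which is why cross-layer congestion control targets equal airtime rather than equal throughput.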

Dr. Jenq-Neng Hwang received his Ph.D. degree from the University of Southern California. In the summer of 1989, Dr. Hwang joined the Department of Electrical Engineering of the University of Washington in Seattle, where he was promoted to Full Professor in 1999. He also served as the Associate Chair for Research & Development in the EE Department from 2003 to 2005. He has written more than 250 journal papers, conference papers, and book chapters in the areas of multimedia signal processing and multimedia system integration and networking, including a recent textbook, "Multimedia Networking: from Theory to Practice," published by Cambridge University Press. Dr. Hwang has a close working relationship with industry on multimedia signal processing and multimedia networking. He also co-founded the HomeMeeting company, one of the largest e-learning and video conferencing system developers in Taiwan, and was the main architect of a commercially available interactive IPTV system.
Dr. Hwang received the 1995 IEEE Signal Processing Society Best Journal Paper Award. He is a founding member of the Multimedia Signal Processing Technical Committee of the IEEE Signal Processing Society and was the Society's representative to the IEEE Neural Network Council from 1996 to 2000. He served as an associate editor for IEEE T-SP, T-NN, and T-CSVT, and is now an Associate Editor for IEEE T-IP and an Editor for JISE and ETRI. He was the Program Co-Chair of ICASSP 1998 and ISCAS 2009. Dr. Hwang has been a Fellow of the IEEE since 2001.


David H.C. Du

Professor, Department of Computer Science and Engineering
University of Minnesota, Minneapolis

Title : A New Era after the Convergence of Network Centric and Data Centric Computing

The Internet today has grown to an enormously large scale; devices large and small are connected to it from anywhere on earth. We can therefore argue that we are in a network centric era. With the rapid advancement of technology, we now also have cheap, small devices like sensors and embedded processors with high computing power and large storage capacity. These devices are designed to improve our daily life by monitoring our environment, collecting critical data, and executing special instructions, and they have gradually become an essential and prominent part of the Internet. Much imaging, audio, and video data has been converted from analog to digital form. As a result, an unprecedented amount of data is collected by these devices and made available via the Internet, and managing this data and finding the desired information in it has become a great challenge. We can therefore certainly also say that we are in a data centric era. In this talk, we will examine the challenges in the convergence of network centric and data centric computing. At the same time, many emerging applications, such as service-oriented, security, and real-time applications, demand much better support than the current Internet can offer. What the future Internet should look like is still undetermined. We will present a vision of a content-addressable future Internet that requires integrating the capabilities of networking with storage devices, and discuss the essential changes required in data representation, information retrieval, storage systems, and networking design. We believe an object-oriented intelligent storage is an essential part of the solution for this new computing and communication environment. We will also present a number of research projects currently carried out in the Digital Technology Center Intelligent Storage Consortium (DISC) at the University of Minnesota. These projects include data deduplication, power management in data centers, long-term data preservation, data/storage security and privacy, and flash-memory-based solid state drives.
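As a hedged illustration of one of these directions, the sketch below shows the core idea behind data deduplication: chunk the data, fingerprint each chunk with a cryptographic hash, and store each unique chunk only once. This is a generic textbook scheme, not a description of DISC's systems; the chunk size and sample data are invented.

```python
# Minimal fixed-size-chunk deduplication sketch (illustrative only).
import hashlib

CHUNK = 4   # tiny chunk size for the example; real systems use kilobytes

store = {}  # fingerprint -> chunk bytes (each unique chunk stored once)

def dedup_write(data: bytes):
    """Split data into chunks; store each unique chunk once.

    Returns the file's "recipe": the ordered list of chunk fingerprints.
    """
    recipe = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        fp = hashlib.sha256(chunk).hexdigest()
        store.setdefault(fp, chunk)   # duplicate chunks are not stored again
        recipe.append(fp)
    return recipe

def dedup_read(recipe):
    """Reassemble a file from its recipe."""
    return b"".join(store[fp] for fp in recipe)

r1 = dedup_write(b"abcdabcdabcdxyz!")   # 16 bytes written...
r2 = dedup_write(b"abcdxyz!")           # ...plus 8 more
assert dedup_read(r1) == b"abcdabcdabcdxyz!"
print(len(store))  # 2: only the unique chunks b"abcd" and b"xyz!" are kept
```

Production systems refine this with content-defined chunking and collision-resistant fingerprint indexes, but the space saving comes from exactly this store-once-by-hash step.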

Dr. David Du is currently the Qwest Chair Professor of Computer Science and Engineering at the University of Minnesota, Minneapolis. He served as a Program Director (IPA) in the National Science Foundation CISE/CNS Division from March 2006 to September 2008. At NSF, he was responsible for the NOSS (Networks of Sensor Systems) program in the NeTS networking research cluster, and worked with two other colleagues, Karl Levitt and Ralph Wachter, on the Cyber Trust program. Dr. Du received his Ph.D. degree from the University of Washington (Seattle) in 1981 and joined the University of Minnesota as a faculty member the same year.

Dr. Du has a wide range of research expertise, including multimedia computing, mass storage systems, high-speed networking, sensor networks, cyber security, high-performance file systems and I/O, database design, and CAD for VLSI circuits. He has authored and co-authored over 200 technical papers, including 95 refereed journal publications, in these research areas. He has graduated 49 Ph.D. and 80 M.S. students in the last 25 years. Dr. Du is an IEEE Fellow (since 1998) and a Fellow of the Minnesota Supercomputer Institute. He is currently serving on the editorial boards of several international journals. He has also served as conference chair and program committee chair for several major conferences in the multimedia, networking, database, and security areas. Currently he is the General Chair of the 30th IEEE Symposium on Security and Privacy (2009) and Program Committee Co-Chair of the 37th International Conference on Parallel Processing (2009). He has had research grants from many federal funding agencies, including NSF, DARPA, ONR, and DOE. He has strong ties with industrial researchers and has collaborated with a number of companies, including IBM, Intel, Cisco, Symantec, Seagate, Sun Microsystems, and Honeywell.


Yanchun Zhang
Professor of Computer Science and Director of Centre for Applied Informatics, Victoria University

Title : Web Community Mining and Analysis

Due to the inherent correlation among Web objects and the lack of a uniform schema for Web documents, Web community mining and analysis has become an important area of Web data management and analysis. Research on Web communities spans a number of domains, including Web mining, Web search, clustering, and text retrieval. In this talk we will present some recent studies on this topic, which cover finding relevant Web pages based on linkage information, discovering user access patterns by analyzing Web log files, co-clustering Web objects, and investigating social networks from Web data. Algorithmic issues and related experimental studies will be addressed, and some research directions will also be discussed.
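As one hedged example of ranking relevant pages from linkage information alone, the sketch below iterates HITS-style hub and authority scores on an invented four-page graph; the talk's own algorithms are not specified here, so this stands in only for the general idea.

```python
# HITS-style hub/authority iteration on a tiny invented web graph:
# a page pointed to by good hubs is a good authority, and vice versa.
links = {        # page -> pages it links to (hypothetical example)
    "a": ["b", "c"],
    "b": ["c"],
    "c": [],
    "d": ["c"],
}
pages = list(links)
hub = {p: 1.0 for p in pages}
auth = {p: 1.0 for p in pages}

for _ in range(20):
    # authority score: sum of hub scores of the pages linking in
    auth = {p: sum(hub[q] for q in pages if p in links[q]) for p in pages}
    # hub score: sum of authority scores of the pages linked to
    hub = {p: sum(auth[q] for q in links[p]) for p in pages}
    # normalize so the scores stay bounded across iterations
    for scores in (auth, hub):
        norm = sum(v * v for v in scores.values()) ** 0.5
        for p in scores:
            scores[p] /= norm

best = max(auth, key=auth.get)
print(best)  # "c" attracts the most in-links, so it is the top authority
```

Community-finding methods build on the same linkage signal, grouping pages whose link neighborhoods overlap strongly.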

Yanchun Zhang is a full Professor of Computer Science and Director of the Centre for Applied Informatics Research at Victoria University. Dr. Zhang obtained a PhD degree in Computer Science from The University of Queensland in 1991. Prof. Zhang's research interests include databases, cooperative transaction management, web information systems, web mining, web services, and e-research. He has published over 200 research papers in international journals and conference proceedings, including top journals such as ACM Transactions on Computer-Human Interaction (TOCHI) and IEEE Transactions on Knowledge and Data Engineering (TKDE), as well as a dozen books and journal special issues in the related areas.

Dr. Zhang is the founding editor and editor-in-chief of World Wide Web and the founding editor of the Web Information Systems Engineering and Internet Technologies book series from Springer. He is Chairman of the International Web Information Systems Engineering Society (WISE) and is currently a member of the Australian Research Council's College of Experts.


Jhing-fa Wang
Department of Electrical Engineering, 
National Cheng Kung University, Tainan, Taiwan

Title : Orange Computing & Technology: Humanism-Inspired Future Computing

There is no doubt that research and development in science and technology should be applied to improve human life. For example, Edison invented the light bulb, Watt improved the steam engine, Franklin invented the lightning rod, and Darwin developed the theory of evolution; all of them contributed greatly to humankind.

Recently, the Humanitarian Technology Challenge (HTC) has become part of the IEEE mission: IEEE's core purpose is to foster technological innovation and excellence for the benefit of humanity. IEEE aims to raise awareness of itself as a "global association of professionals and academics who solve technological problems that positively impact humanity". IEEE will bring a more systematic approach to applying technology to solve world problems, define methodologies for addressing challenge-oriented, large-scale efforts, and identify challenges and drive solutions that are implementable and sustainable.

Orange computing, or orange technology, is a new "color" of technology that we define, as distinct from green technology, to stand for humanism technology. It refers to research and technologies that pay particular attention to caring for the physiology (body), psychology (mind), and spirit of human beings.

Future computing will explore creative new computing models, including the hardware and software systems for such applications. Wikipedia gives a general definition of computing: "we can define computing to mean any goal-oriented activity requiring, benefiting from, or creating computers. Thus, computing includes designing and building hardware and software systems for a wide range of purposes; processing, structuring, and managing various kinds of information; doing scientific studies using computers; making computer systems behave intelligently; creating and using communications and entertainment media; finding and gathering information relevant to any particular purpose, and so on."

In this talk, based on the above statements, the following topics will be addressed:

1. Orange Technology vs. Green Technology
2. Orange Technology & Humanism 
3. Related Research Efforts in the World
4. Orange Technology & Future Computing
5. Research Trend for Orange Technology

Prof. Jhing-fa Wang is currently a Chair Professor in the Department of Electrical Engineering, National Cheng Kung University (NCKU). He received his bachelor's and master's degrees from NCKU, Taiwan, in 1973 and 1979, and his Ph.D. from the Stevens Institute of Technology, USA, in 1983. He is now also the Chairman of the IEEE Tainan Section. He was elected an IEEE Fellow in 1999 for his contributions to "hardware and software co-design for speech signal processing".

He received Outstanding Research Awards from the National Science Council in 1990, 1995, and 1997, and its Outstanding Researcher Award in 2006. He also received Outstanding Industrial Awards from ACER and the Institute for Information Industry in 1991, and the Outstanding Professor Award from the Chinese Engineer Association, Taiwan, in 1996. He further received a Culture Service Award from the Ministry of Education, Taiwan, in 2008, and the K. T. Li Distinguished Scholar Award from NCKU in 2009.

Prof. Wang has been invited to give keynote speeches at PACLIC 12 in Singapore (1998), UWN 2005 in Taipei, WirelessCom 2005 in Hawaii, IIH-MSP 2006 in Pasadena, USA, ISM 2007 in Taichung, and PCM 2008 in Tainan. He also served as an associate editor of the IEEE Transactions on Neural Networks and the IEEE Transactions on VLSI Systems, and as Editor-in-Chief of the International Journal of Chinese Engineering, from 1995 to 2000.

He is now leading a research team, sponsored by the Ministry of Economic Affairs (MOEA) with a grant of about one million US dollars, to research advanced multimedia technology for human-centric digital life.

Prof. Wang's research areas are multimedia signal processing, including speech signal processing and image processing, and VLSI system design. He has published about one hundred journal papers in IEEE, SIAM, IEICE, and IEE journals, and about two hundred international conference papers.