Spring Seminars 2011

 

Microwave Remote Sensing of Surface and Deep Soil Moisture
Using Forward and Inverse Electromagnetic Scattering Models
Alireza Tabatabaeenejad
University of Michigan
Monday, May 2, 2011, 3:35 pm, Marston 132
Host: Steve Frasier

Forward and inverse electromagnetic scattering models are powerful and indispensable tools for imaging and remote sensing, not only in the Earth sciences and space exploration but also in medicine. Exploration geophysics has long been of special interest to the oil and gas industry. The need to study global climate change has motivated missions such as NASA's Soil Moisture Active and Passive (SMAP), which will globally monitor the soil moisture and freeze-thaw state of the Earth with unprecedented resolution using microwave remote sensing techniques. Remote sensing of water on planets such as Mars, where water is suspected to exist deep below the surface, has attracted the attention of the space science community in its quest to understand the origins of life. Biomedical imaging has become a high-priority research area within institutes concerned with medical research.

This talk will present a systematic study of soil moisture retrieval for bare and forested landscapes using forward and inverse electromagnetic scattering models. First, two 3D analytical scattering models, namely the small perturbation method (SPM) and the Kirchhoff approximation (KA), will be presented for calculating radar backscatter from layered rough surfaces. The focus will then shift to inversion of the corresponding model parameters, leading to the retrieval of surface and subsurface soil moisture and, in the case of a forested area, vegetation parameters. The main goal is to retrieve soil moisture from radar data, but it will be shown that other model parameters of possible interest can also be retrieved with high accuracy. The performance of the inversion algorithm, which is based on a global optimization scheme known as simulated annealing, is evaluated for several scenarios encountered in practice. The dependence of inversion accuracy on modeling errors, on the number and values of measurement frequencies, and on forest allometric relations will be discussed. Validation results are presented using data acquired with the NASA/JPL UAVSAR in June 2010 over the boreal forests of central Canada, in support of the pre-launch calibration and validation activities of the SMAP mission. The techniques presented here are planned for use in the upcoming Earth Ventures-1 (EV-1) mission, the Airborne Microwave Observatory of Subcanopy and Subsurface (AirMOSS), for retrieval of root-zone soil moisture using P-band synthetic aperture radar data.
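
To make the inversion step concrete, the following is a minimal, self-contained sketch of a simulated-annealing retrieval loop. It is an illustration only, not the speaker's code: the toy two-channel forward model and all numerical values are assumptions standing in for the SPM/KA layered-surface models discussed in the talk.

import math
import random

def forward_model(params):
    """Toy forward model: backscatter (dB) at two channels as a function of
    surface moisture mv1 and subsurface moisture mv2 (placeholder only)."""
    mv1, mv2 = params
    return [-20.0 + 15.0 * mv1 + 3.0 * mv2,   # stand-in for one frequency/polarization
            -18.0 + 10.0 * mv1 + 6.0 * mv2]   # stand-in for another channel

def cost(params, observed):
    simulated = forward_model(params)
    return sum((s - o) ** 2 for s, o in zip(simulated, observed))

def simulated_annealing(observed, bounds, n_iter=20000, t0=1.0, alpha=0.9995):
    current = [random.uniform(lo, hi) for lo, hi in bounds]
    best, best_cost = current[:], cost(current, observed)
    temperature = t0
    for _ in range(n_iter):
        # Propose a small random perturbation, kept within the parameter bounds.
        candidate = [min(max(x + random.gauss(0, 0.02), lo), hi)
                     for x, (lo, hi) in zip(current, bounds)]
        delta = cost(candidate, observed) - cost(current, observed)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            current = candidate
            c = cost(current, observed)
            if c < best_cost:
                best, best_cost = current[:], c
        temperature *= alpha
    return best

observed = forward_model([0.25, 0.35])          # synthetic "measurement"
bounds = [(0.02, 0.5), (0.02, 0.5)]             # plausible volumetric moisture range
print(simulated_annealing(observed, bounds))    # should approach [0.25, 0.35]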

Alireza Tabatabaeenejad received the B.S. degree from Sharif University of Technology, Iran, and the M.S. and Ph.D. degrees from the University of Michigan, Ann Arbor, in 2001, 2003, and 2008, respectively, all in electrical engineering. He has been a Postdoctoral Research Fellow at the University of Michigan’s Radiation Laboratory since December 2008. His research interests are microwave remote sensing, applied and computational electromagnetics, and applied and computational mathematics. In particular, he is interested in the development and application of forward and inverse electromagnetic scattering models for the Earth and space sciences. Dr. Tabatabaeenejad has been involved in pre-launch modeling and validation activities for NASA’s SMAP mission since 2009 and in NASA’s EV-1 AirMOSS mission since January 2011. He is the recipient of the Young Scientist Award from the International Union of Radio Science (URSI) General Assembly in 2005.

 

Algorithms for High-Dimensional Inference: Analyses and Applications
Dr. Alyson Fletcher
University of California, Berkeley
Thurs., April 28, 2011, 4-5PM, Marston 132
Host: Patrick Kelly

Sparsity has emerged as a central feature of a number of statistical inference problems in fields including image processing, machine learning, biology, and compressed sensing. The prominence of sparsity in these problems raises the questions of how sparse or low-dimensional structures can be inferred from data, how current algorithms perform, and what the ultimate limits are for sparsity-based estimation.

In this talk, I will provide surprisingly sharp answers to some of these questions in a high-dimensional setting. I will present new scaling laws on the sample complexity of sparsity detection algorithms that precisely bound the gap between the information-theoretic limit for sparse detection and the achievable performance with practical algorithms.  I will also present a novel analysis based on the replica method from statistical physics that can provide an exact characterization of the asymptotic performance of a large class of estimators under large random measurements. The replica analysis enables the first general and precise description of the behavior of several widely-used compressed sensing reconstruction methods, including lasso.

The replica analysis is also closely connected to recently developed approximate message passing (AMP) algorithms based on Gaussian approximations of loopy belief propagation. I will present an extension of the AMP algorithm that can incorporate both nonlinear output channels and group sparsity. This algorithm can then be applied to the problems of neural decoding and neural mapping, where the synaptic weights between neurons are to be estimated from recordings under multi-neuron excitation. A generalization of the algorithm for certain nonlinear parametrizations is also developed for the problem of image recovery in multi-coil MRI.
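
For readers unfamiliar with AMP, the following is a minimal sketch of the basic AMP iteration for the linear model y = A x + w with a soft-thresholding denoiser, the setting in which the replica predictions for lasso apply. It is an illustration under assumed parameters (threshold rule, problem sizes), not the speaker's extended algorithm for nonlinear channels or group sparsity.

import numpy as np

def soft(u, t):
    """Soft-thresholding denoiser used in lasso-style AMP."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def amp(y, A, n_iter=30, alpha=1.5):
    m, n = A.shape
    delta = m / n
    x = np.zeros(n)
    z = y.copy()
    for _ in range(n_iter):
        r = x + A.T @ z                              # pseudo-data: estimate + matched filter
        tau = alpha * np.sqrt(np.mean(z ** 2))       # threshold set from residual energy
        x_new = soft(r, tau)
        # Onsager correction term: this is what distinguishes AMP from plain
        # iterative soft thresholding and makes the Gaussian approximation accurate.
        onsager = (z / delta) * np.mean(np.abs(x_new) > 0)
        z = y - A @ x_new + onsager
        x = x_new
    return x

rng = np.random.default_rng(0)
n, m, k = 1000, 400, 40
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(m)
x_hat = amp(y, A)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))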

Alyson Fletcher received the M.S. and Ph.D. in electrical engineering and the M.S. in mathematics from the University of California, Berkeley. She is a recipient of a University of California President's Postdoctoral Fellowship, the UC-Berkeley EECS Lawler Award, and a Luce fellowship.  Her research interests include signal processing, machine learning, dynamical systems and optimization, computational neuroscience, and medical imaging.

 

ABC: An Academic Industrial-Strength Verification Tool
Alan Mishchenko
UC Berkeley
Wed., April 27, 2011, 3:35-4:35pm, Marston 132
Host: Maciej Ciesielski (CSE)

ABC is a public-domain system for logic synthesis and formal verification of binary logic circuits appearing in synchronous hardware designs. ABC combines scalable logic transformations based on And-Inverter Graphs (AIGs) with a variety of innovative algorithms. A focus on the synergy of sequential synthesis and sequential verification leads to improvements in both domains. This talk introduces ABC, motivates its development, and illustrates its use in formal verification.

Dr. Alan Mishchenko graduated from Moscow Institute of Physics and Technology (Moscow, Russia) in 1993 and Glushkov Institute of Cybernetics (Kiev, Ukraine) in 1997. From 1998 to 2002 he was an Intel-sponsored researcher at Portland State University. In 2002, he joined the EECS Department at UC Berkeley, where he is currently an associate researcher at Berkeley Verification and Synthesis Research Center (BVSRC). Alan is interested in developing efficient methods for synthesis and verification.

 

Land Applications of Microwave Remote Sensing
Haroon Stephen
University of Nevada, Las Vegas
Director, GIS and Remote Sensing Core Lab
Tues., April 26, 2011, 11:00am, Gunness Student Center
Host: Steve Frasier

Microwave remote sensing can measure properties that are sensitive to the physical, electrical, and thermal characteristics of Earth’s surface. These properties provide insight into geophysical processes and help us understand the spatial and temporal interconnections of the regional and global dynamics of land, water, and climate.

This presentation focuses on the relationships between spaceborne microwave data and land surface geometrical, dielectric, and thermal characteristics. These relationships are demonstrated through modeling of radar backscatter and radiometric temperature of sand dunes, vegetation, and water. The relationship of microwave observations to the surface geometrical characteristics will be shown through the modeling of microwave scattering and emission from sand dunes. The relationship to the surface dielectric characteristics will be shown through the modeling of backscatter dependence on soil moisture and water level in arid lands and wetlands, respectively. The relationship to the thermal characteristics will be shown through the modeling of radiometric temperature dependence on solar insolation. The benefits of microwave remote sensing techniques will be presented through land surface research applications.

Dr. Haroon Stephen received his B.S. degree in Agricultural Engineering in 1995 from the University of Agriculture Faisalabad, Pakistan; an M.S. degree in Remote Sensing and GIS in 1997 from the Asian Institute of Technology, Bangkok, Thailand; and a Ph.D. degree in Electrical and Computer Engineering in 2006 from Brigham Young University, Provo, Utah. He joined the University of Nevada, Las Vegas (UNLV) Water Resources Lab in 2007 as a postdoctoral researcher. Currently, he serves at UNLV as the Director of the GIS and Remote Sensing Core Lab.

Dr. Stephen has diverse research experience in the areas of Remote Sensing, GIS, and GPS applications. His Ph.D. research involved the modeling of microwave scattering and emission behavior of electromagnetic waves over Saharan sand surfaces and Amazon vegetation. His ongoing research interests include applications of remote sensing and GIS technologies to water resource mapping, drought study, and climate change study. Presently, he is involved in several Federal and State sponsored research projects involving geospatial data research and applications. He is also developing a geovisualization facility at UNLV that will provide state-of-the-art visualization for the research and educational needs of UNLV and the region.

 

Moore's Law
Stanley Mazor
Monday, April 25, 2011, 3:35-4:45pm, Marston 132
Host: Maciej Ciesielski (CSE)

Moore's law is in the news and is a way of describing the historic growth in the density of integrated circuits. Several of the factors that contributed to the increased density of ICs during 1960-2000 will be described, some of which are still applicable today. Additionally, the influence of chip density on early microprocessor architecture and design will be discussed, particularly the "LSI constraints" that confronted the chip designer. Mr. Mazor will describe his experiences in the design of the early Intel chips.

Stanley Mazor worked on early microprocessor chips at Intel and shares patents on the 4004 and 8080. Previously he worked on the design of "Symbol", a high-level language computer, at Fairchild R&D (1964). He has worked in several start-up companies, including Intel, BEA Systems, Synopsys, Silicon Compilers, Numerical Technologies, and Cadabra in Ottawa. He studied mathematics at San Francisco State University in 1963. He has published 50 articles relating to LSI chips and five books, including "A Guide to VHDL", published by Kluwer in 1993.

He was awarded the Kyoto Prize, the Ron Brown American Innovator Award, the SIA Robert Noyce Award, and the National Medal of Technology and Innovation; he was inducted into the National Inventors Hall of Fame and is a Fellow of the Computer History Museum. His hobby is architecture, and he recently published books entitled "Design an Expandable House" and "Stock Market Gambling".


Automated Summarization of Hyperspectral Images: Application to Mars
Mario Parente
Department of Geological Sciences
Brown University
April 22, 2011, 11:15 am, Gunness Student Center
Host: Steve Frasier

Terrestrial and planetary remote sensing systems such as the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) can benefit from the use of automatic approaches and statistical learning techniques because of the amount of data involved. Owing to the sensor's high resolution, CRISM data volumes overwhelm scientists' capacity for exhaustive manual analysis. The analysis of such a dataset would benefit from an automated process that could identify the unique spectral signatures present in a scene (image endmembers) and store them for further examination or interpretation. If installed aboard an orbital system, such a tool could relieve transmission constraints for high-bandwidth hyperspectral datasets by giving priority to the most informative data products. In this talk, I will introduce an algorithm that extracts image endmembers from hyperspectral CRISM observations, which can serve as a concise mineralogical representation of the scene for cataloging purposes, in addition to existing browse products and parameter maps.

The spectral summarization procedure has several stages. The algorithm focuses on smaller sub-images and operates separately on each one. The steps that follow leverage the representation of the hyperspectral image as a cloud of pixel vectors (points) in a space of dimension equal to the number of spectral channels. A nonlinear dimensionality reduction technique enhances dissimilarities between points while preserving local features in their spectral shapes. The approach produces a representation of the data with well-separated components that are successively identified as natural clusters by a spectral clustering technique. Each cluster or spectral "family" is further analyzed by an unmixing algorithm that describes the roughly convex shapes of the regions with polygons whose vertices are local endmembers. Since a corner of a cluster can lie in the interior of the data cloud, the procedure screens the candidate spectra based on a spectral similarity score to extract global endmembers.
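
The following is a highly simplified sketch of this kind of pipeline, offered only as an illustration: PCA and a farthest-point heuristic stand in for the talk's nonlinear dimensionality reduction and unmixing stages, and the synthetic three-endmember "image" is an assumption, not CRISM data.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(1)
# Synthetic "image": mixtures of three reference spectra plus noise (200 bands).
bands, n_pix = 200, 500
endmembers = rng.random((3, bands))
abund = rng.dirichlet(np.ones(3), size=n_pix)
pixels = abund @ endmembers + 0.01 * rng.standard_normal((n_pix, bands))

# Stage 1: dimensionality reduction of the pixel cloud.
features = PCA(n_components=5).fit_transform(pixels)

# Stage 2: group pixels into spectral "families" with spectral clustering.
labels = SpectralClustering(n_clusters=3, affinity="nearest_neighbors",
                            n_neighbors=20, random_state=0).fit_predict(features)

# Stage 3: within each family, pick the pixel farthest from the global mean as a
# candidate (local) endmember; a real system would unmix and screen candidates.
mean_spec = pixels.mean(axis=0)
candidates = [pixels[labels == c][np.argmax(
    np.linalg.norm(pixels[labels == c] - mean_spec, axis=1))] for c in range(3)]
print(np.array(candidates).shape)   # three candidate endmember spectra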

The technique has been demonstrated to effectively capture the spectral variability in several well-characterized CRISM images and is resilient to spectrometer noise. The most effective feature of the algorithm is its ability to capture subtle differences in the spectral shape of surface components (e.g., small differences in absorption band positions). The algorithm also outperformed several other state-of-the-art automated approaches on image endmember extraction tasks. This software is undergoing extensive validation aimed at confirming that the proposed method can be used pervasively and reliably in the summarization of the whole CRISM database. Similar methods have also been proposed to the Moon Mineralogy Mapper and OMEGA teams as tools to aid researchers in lunar and Martian hyperspectral image analysis.

Mario Parente received the B.S. and M.S. (summa cum laude) degrees in telecommunication engineering from the University of Naples Federico II, Italy, and the M.S. and Ph.D. degrees in electrical engineering, with a minor in statistics, from Stanford University, Stanford, CA. Dr. Parente is currently a Postdoctoral Research Associate in the Department of Geological Sciences at Brown University. His research involves combining physical models and statistical techniques to address issues in remote sensing of Earth and planetary surfaces. His interests include identification of ground composition, geomorphological feature detection, and imaging spectrometer data modeling, reduction, and calibration.

Dr. Parente is a member of the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) and the Moon Mineralogy Mapper (M3) research teams. He has developed several algorithms for unsupervised statistical learning of multispectral and hyperspectral images, texture recognition and simulation of the imaging chain for CRISM and M3. He also has developed techniques for signal denoising and reduction for the official CRISM and M3 spectrometer data processing software.

Dr. Parente received the IEEE Geoscience and Remote Sensing Society best paper award in 2009 at the WHISPERS conference in Grenoble, France. He has participated as a co-I in NASA funded projects and is currently leading a Mars Data Analysis Program proposal selected by the NASA panel and awaiting funding availability.

 

Tomography over Graphs: a Sparse Inference Framework
Weiyu Xu
Cornell University
Thur., April 21, 2011, 11:15am-12:15pm, Gunness Student Center
Host: Patrick Kelly (COMMUNICATIONS and SIGNAL PROCESSING)

Our society has been, and will continue to be, increasingly dependent on large-scale integrated sensing, communication, and computation systems, such as the Internet and sensor networks. For example, a large volume of data is generated over the Internet and sensor networks every day. On the one hand, these ever-expanding networked information systems keep us more informed and more connected and help us make intelligent decisions. On the other hand, the large scale of these complex networked systems and their operational constraints make it harder for us to directly probe their structures and states. Often we can acquire only limited information about these networked systems, from which we attempt to learn the full system state. The area of network tomography is devoted to making such inferences.

In this talk, we will present a general framework for inference in large-scale networked systems from limited observations and incomplete information. We consider the problem of inferring the link characteristics of a network (such as the communication delay or packet loss rate over a link) through indirect end-to-end, topology-constrained path observations. We show that for a sufficiently connected graph with n nodes, we are able to learn the states of the network using only O(k log(n)) end-to-end path observations, if only k unknown links of the graph have abnormal link characteristics. We further demonstrate that certain computationally efficient message passing and convex optimization algorithms can provide theoretical guarantees in inferring the states of the network from limited observations. Our recent results on explicit constructions of the working observation paths will also be discussed.
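
A toy version of this inference problem can be written as sparse recovery over a path/link incidence matrix. The sketch below is an illustration with assumed sizes and thresholds, not the speaker's construction; it uses an off-the-shelf lasso solver in place of the message passing and convex optimization algorithms analyzed in the talk.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n_links, n_paths, k = 200, 60, 5          # k anomalous links, roughly O(k log n) paths
R = (rng.random((n_paths, n_links)) < 0.1).astype(float)   # random path/link incidence

x_true = np.zeros(n_links)                # excess delay on each link
x_true[rng.choice(n_links, k, replace=False)] = rng.uniform(5, 10, k)
y = R @ x_true                            # observed end-to-end excess path delays

# Sparse recovery: most links are normal, so the anomaly vector is sparse.
lasso = Lasso(alpha=0.05, positive=True, max_iter=10000).fit(R, y)
suspects = np.flatnonzero(lasso.coef_ > 1.0)
print("true anomalous links:", np.flatnonzero(x_true))
print("recovered suspects:  ", suspects)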

This work establishes interesting connections between networking, signal processing, graph theory and coding theory. At the end of this talk, I will also briefly discuss future research directions and topics.

Weiyu Xu received his Bachelor’s degree in information engineering from the Beijing University of Posts and Telecommunications in 2002, the Master’s degree in electronic engineering from Tsinghua University in 2005, and the Ph.D. degree in electrical engineering, with a minor in applied and computational mathematics, from the California Institute of Technology in 2009. Since September 2009, he has been a postdoctoral associate in the networks group at Cornell University. His research interests are in networking science and systems, communication, and signal processing, with an emphasis on compressive sensing, detection and estimation, coding, and information theory. Dr. Xu received the Charles and Ellen Wilts best dissertation prize for his research in compressive sensing and low-rank matrix recovery.

 

A Novel Ku-Band Radiometer/Scatterometer Approach for Improved Oceanic Wind Vector Measurements
Suleiman Alsweiss
University of Central Florida
Wed., April 20, 2011, 3:35 PM, Marston 132
Host: Steve Frasier (ELECTROPHYSICS)

A conceptual conical-scanning radiometer/scatterometer (RadScat) instrument design for improving satellite ocean vector wind retrievals under rain-free conditions will be presented. This technique combines the wind vector signature in the passive linearly polarized ocean brightness temperatures with the anisotropic signature of multi-azimuth radar cross-section measurements to retrieve improved oceanic surface wind vectors. The performance of the RadScat is evaluated using a Monte Carlo simulation based on actual measurements from the SeaWinds scatterometer and the Advanced Microwave Scanning Radiometer onboard the Advanced Earth Observing Satellite II. The results demonstrate significant improvements in wind vector retrievals, particularly in the near-subtrack portion of the swath, where the performance of conical-scanning scatterometers degrades.

Suleiman Alsweiss received the B.Sc. degree in electrical engineering from Princess Sumaya University for Technology (PSUT), Amman, Jordan, in 2004, and the M.S. degree in electrical engineering from the University of Central Florida (UCF), Orlando, in 2007. He is currently a Research Assistant with the Central Florida Remote Sensing Laboratory (CFRSL) at UCF, working toward the Ph.D. degree.

In spring 2006, he joined the academic program at UCF, where he has followed the Communications Systems track with a specialization in microwave remote sensing, both active and passive. His primary research has focused on developing algorithms to retrieve accurate ocean surface vector winds using active and passive satellite data, especially in extreme weather events.

Moreover, he has taken part in pre- and post-launch instrument calibration and in simulation experiments to evaluate the performance of new instruments. He has had the opportunity to work with several spaceborne instruments through funded projects, including QuikSCAT, ADEOS-II (SeaWinds and AMSR), WindSat, HIRAD, the MWR on Aquarius/SAC-D, and the Dual Frequency Scatterometer (DFS).

 

Spectral-Spatial Classification of Hyperspectral Remote Sensing Data
Yuliya Tarabalka
NASA Goddard Space Flight Center
Thurs., April 14, 2011, 11:00 am, Gunness Student Center
Host: Steve Frasier

Hyperspectral imaging provides rich spectral information for every pixel in a scene, hence increasing the ability to distinguish physical structures in the scene. However, the large number of spectral channels presents challenges for image classification. While pixelwise classification techniques process each pixel independently, without considering information about spatial structures, further improvements can be achieved by incorporating spatial information in a classifier, especially in areas where structural information is important for distinguishing between classes.

In this talk, we will present novel strategies for spectral-spatial classification of hyperspectral images. One of the recently proposed approaches consists of performing image segmentation and then using every region from the segmentation map as an adaptive spatial neighborhood for all the pixels within that region. We will discuss different segmentation techniques and their use for hyperspectral images. We will also explore approaches for reducing oversegmentation in an image, achieved by automatically marking the spatial structures before performing a marker-controlled segmentation. We will discuss different approaches for marker selection and for marker-controlled region growing. We will show that the new techniques improve classification accuracies and provide classification maps with more homogeneous regions when compared to previously proposed methods.
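
The segmentation-based idea can be illustrated in a few lines of code: each region of the segmentation map acts as an adaptive neighborhood, and the pixelwise classification is refined by majority voting within each region. The toy 4x4 example below is an assumption for illustration, not the speaker's implementation.

import numpy as np

def majority_vote(pixel_labels, segment_map):
    """Replace each pixel's class with the majority class of its segment."""
    refined = pixel_labels.copy()
    for seg_id in np.unique(segment_map):
        mask = segment_map == seg_id
        classes, counts = np.unique(pixel_labels[mask], return_counts=True)
        refined[mask] = classes[np.argmax(counts)]
    return refined

# Toy example: a noisy 4x4 pixelwise classification and a two-region segmentation.
pixel_labels = np.array([[0, 0, 1, 1],
                         [0, 1, 1, 1],
                         [0, 0, 1, 0],
                         [0, 0, 1, 1]])
segment_map = np.array([[0, 0, 1, 1],
                        [0, 0, 1, 1],
                        [0, 0, 1, 1],
                        [0, 0, 1, 1]])
print(majority_vote(pixel_labels, segment_map))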

Dr. Yuliya Tarabalka received the B.S. degree in computer science from Ternopil Ivan Pul'uj State Technical University, Ukraine, in 2005, M.Sc. degree in signal and image processing from the Grenoble Institute of Technology (INPG), France, in 2007, the Ph.D. degree with European Honours in signal and image processing from INPG, and the Ph.D. degree in electrical engineering from the University of Iceland, in 2010.

From July 2007 to January 2008, she was a Researcher with the Norwegian Defence Research Establishment, Norway. She is currently a Postdoctoral Research Fellow at the NASA Goddard Space Flight Center, Greenbelt, MD, USA. Her research interests are in the areas of remote sensing, imaging spectroscopy, image analysis and signal processing, machine learning, and high-performance computing. She is a member of the IEEE and the IEEE Geoscience and Remote Sensing Society.

 

Shannon Theory for Compressed Sensing: Fundamental Limits and Optimal Phase Transition
Yihong Wu
Princeton University
Wednesday, April 13, 2011, 3:35PM, Marston 132
Host: Patrick Kelly (COMMUNICATIONS and SIGNAL PROCESSING)

Current data acquisition methods are often extremely wasteful: massive amounts of data are acquired and then discarded by a subsequent compression stage. In contrast, a recent approach known as compressed sensing combines data collection and compression, enormously reducing the volume of data at much greater efficiency and lower cost. A paradigm of lossless encoding of analog sources by real numbers rather than bits, compressed sensing deals with the efficient recovery of sparse vectors from the information provided by linear measurements. However, partially due to the non-discrete nature of the problem, none of the existing models allows a complete understanding of the theoretical limits of the compressed sensing paradigm.

As opposed to the conventional worst-case (Hamming) approach, in this talk we introduce a statistical (Shannon) framework for compressed sensing, where signals are modeled as random processes rather than individual sequences. This framework encompasses signal models more general than sparsity. The key problem is to find the minimum number of measurements per sample that guarantees perfect or robust reconstruction. We show that the information dimension and MMSE dimension of the input signal are the fundamental limits on measurement rate for noiseless and noisy compressed sensing, respectively. These information measures also serve as optimal phase transition thresholds for the reconstruction error probability and noise sensitivity. We also discuss the suboptimality of practical reconstruction algorithms and the implications for developing optimal schemes.
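
As a concrete illustration of the noiseless limit (a standard example from this line of work, stated here for orientation rather than taken from the abstract): for an i.i.d. signal whose entries are zero with probability 1 - gamma and Gaussian otherwise, the information dimension equals the sparsity fraction, so the critical measurement rate is gamma:

% Worked example (illustrative): sparse-Gaussian prior and its information dimension.
P_X = (1-\gamma)\,\delta_0 + \gamma\,\mathcal{N}(0,\sigma^2)
\;\Longrightarrow\;
d(P_X) = \gamma,
\qquad \text{so measurement rates } R > \gamma \text{ suffice for noiseless recovery.}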

Yihong Wu is a Ph.D. candidate in the Department of Electrical Engineering at Princeton University. He received the B.E. degree from Tsinghua University in 2006 and the M.A. degree from Princeton University in 2008, both in electrical engineering. He is a recipient of the Princeton University Wallace Memorial Honorific Fellowship in 2010. His research interests are in information theory, signal processing, mathematical statistics, approximation theory, and distributed algorithms.

 

 

A Theory of Privacy and Utility for Electronic Data Sources
Dr. Lalitha Sankar
Princeton University
Monday, April 11, 2011, 3:35PM, Marston 132
Host: Patrick Kelly (COMMUNICATIONS and SIGNAL PROCESSING)

The concomitant emergence of myriad centralized searchable data repositories has made "leakage" of private information an important and urgent societal problem. This problem drives the need for an overarching analytical framework that can quantify the safety of personally identifiable information (privacy) while still providing a measurable benefit (utility) to multiple legitimate information consumers. State-of-the-art approaches have predominantly focused on privacy. My work presents the first information-theoretic approach, which promises an analytical framework guaranteeing tight bounds on how much utility is possible for a given level of privacy and vice versa. I will first show how rate-distortion theory is a natural choice for developing such a framework, which includes the following: modeling of data sources, developing application-independent utility and privacy metrics, quantifying the largest utility-privacy tradeoff region, developing a side-information model for dealing with questions of external knowledge, and studying a successive disclosure problem for multiple-query data sources. I will then introduce and define privacy in the context of cyber-physical systems, specifically the smart grid, and introduce two unique problems: first, competitive privacy at the network level as a result of collaboration and competition among electricity suppliers, and second, end-user privacy resulting from the deployment of smart meters and the consequent risk to personal privacy. Finally, these privacy problems are driving the need for new theoretical models and analysis: I will briefly discuss two new privacy-constrained source coding problems that I am investigating.

Lalitha Sankar received the B.Tech degree from the Indian Institute of Technology, Bombay, the M.S. degree from the University of Maryland, and the Ph.D. degree from Rutgers University in 2007. Prior to her doctoral studies, she was a Senior Member of Technical Staff at AT&T Shannon Laboratories. She is currently a Research Scholar at Princeton University. Her research interests include wireless communications, information privacy and secrecy, and network information theory. Dr. Sankar was a recipient of a Science and Technology Postdoctoral Fellowship from the Council on Science and Technology at Princeton University during 2007-2010. For her doctoral work, she received the 2007-2008 Electrical Engineering Academic Achievement Award from Rutgers University. She is a co-PI on a three-year NSF CCF grant on database privacy.

 

Signal Recovery from Randomized Measurements Using Structured Sparsity Models
Dr. Marco F. Duarte
Duke University
Friday, April 8, 2011, 11:15AM, Gunness Student Center
Host: Patrick Kelly (COMMUNICATIONS and SIGNAL PROCESSING)

We are in the midst of a digital revolution spawned by the proliferation of sensing devices with ever-increasing fidelity and resolution. The resulting data deluge has motivated compression schemes that rely on transform coding, where a suitable transformation of the data provides a sparse representation that compacts the signal energy into a few transform coefficients. This standard approach, however, still requires signal acquisition at the full Nyquist rate, which cannot be achieved in many emerging applications using current sensing technology. The emerging acquisition paradigm of compressive sensing (CS) leverages signal sparsity for recovery from a small set of randomized measurements. The standard CS theory dictates that robust recovery of a K-sparse, N-length signal is possible from M=O(K log(N/K)) measurements. New sensing devices that implement this measurement process have been developed for applications including imaging, communications, and biosensing.

In this talk, we show that it is possible to substantially decrease the number of measurements M without sacrificing robustness by leveraging more concise signal models that go beyond simple sparsity and compressibility. We present a modified CS theory for structured sparse signals that exploits the dependencies between values and locations of the significant signal coefficients; we provide concrete guidelines on how to create new recovery algorithms for structured sparse signals with provable performance guarantees that require as few as M=O(K) measurements. We also review example applications of structured sparsity for natural images, signal ensembles, and multiuser detection.
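
The structured-approximation step that distinguishes these algorithms from standard sparse recovery can be illustrated in a few lines. The block-sparse example below is an illustration of the general idea under assumed block structure, not the specific models or algorithms presented in the talk.

import numpy as np

def best_k_term(x, k):
    """Standard sparse approximation: keep the k largest-magnitude entries."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def best_k_block(x, block_size, k_blocks):
    """Structured (block-sparse) approximation: keep the k_blocks groups with
    the largest energy; coefficients inside a kept group survive together."""
    out = np.zeros_like(x)
    blocks = x.reshape(-1, block_size)
    energy = np.sum(blocks ** 2, axis=1)
    keep = np.argsort(energy)[-k_blocks:]
    for b in keep:
        out[b * block_size:(b + 1) * block_size] = blocks[b]
    return out

x = np.array([0.1, 0.2, 5.0, 4.0, 0.0, 0.1, 3.0, 2.5])
print(best_k_term(x, 4))            # picks individually large entries
print(best_k_block(x, 2, 2))        # picks the two strongest blocks of size 2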

Marco F. Duarte received the B.Sc. degree in computer engineering (with distinction) and the M.Sc. degree in electrical engineering from the University of Wisconsin-Madison in 2002 and 2004, respectively, and the Ph.D. degree in electrical engineering from Rice University, Houston, TX, in 2009. During 2009-2010, he was a Visiting Postdoctoral Research Fellow in the Program of Applied and Computational Mathematics at Princeton University. He is currently the NSF/IPAM Mathematical Sciences Postdoctoral Research Fellow in the Department of Computer Science at Duke University.

Dr. Duarte received the Rice University Presidential Fellowship and the Texas Instruments Distinguished Fellowship in 2004, and the Hershel M. Rich Invention Award in 2007 for his work on the single pixel camera. He was a coauthor on a paper with Chinmay Hegde and Volkan Cevher that won the Best Student Paper Award at the 2009 International Workshop on Signal Processing with Adaptive Sparse Structured Representations (SPARS). His research interests include compressive sensing, low-dimensional signal models, dimensionality reduction, and distributed signal processing. 

 

The Impact of CMOS Technology on RF Circuit Design
Joseph Bardin
University of Massachusetts Amherst/ ECE Department
Friday, April 1, 2011, 11:15AM-12:05PM, Gunness Student Center
Host: Do-Hoon Kwon (ELECTROPHYSICS)

The first MOS transistor was fabricated at Bell Labs in 1960, and the first full CMOS process was demonstrated by Fairchild Semiconductor just three years later. However, it was not until the nineties that GHz-frequency-range CMOS circuits started appearing from academic research labs. Today, high-frequency RF circuits are all around us in our iPhones, Wi-Fi cards, Bluetooth peripherals, medical devices, and so on. The extent to which this technology has changed the way we live our lives in just the last ten years is quite profound.

In this survey presentation, we will study the impact of CMOS technology on RF circuit design. The presentation will begin with a brief history of CMOS devices and the rapid industry-driven growth of the underlying technology platforms. We will then explore the impact that the emergence of sub-micron and then nanometer CMOS technologies has had upon RF technology. Several real world examples will be presented. Finally, the presentation will conclude with a discussion of some emerging technologies that are expected to gain traction within the next decade.

Professor Bardin received the Ph.D. degree in Electrical Engineering from the California Institute of Technology, the M.S.E.E. degree from UCLA, and the B.S.E.E. degree from UCSB in 2009, 2005, and 2003, respectively. He was with the Jet Propulsion Laboratory from 2003 to 2005 and was a postdoctoral researcher in the Caltech High-Speed Integrated Circuits Laboratory from 2009 to 2010. Since October 2010, he has been with the Department of Electrical and Computer Engineering at the University of Massachusetts, Amherst, where he is currently an Assistant Professor.

 

A Metal-Only Reflectarray Antenna for mm-Wave Applications
Yong Heui Cho
Mokwon University, Korea
Monday, March 28, 2011, 3:35-4:25PM, Marston 132
Host: Do-Hoon Kwon (ELECTROPHYSICS)

In this talk, we will discuss large-antenna technology for millimeter-wave applications. A reflectarray antenna is a good candidate for millimeter-wave applications, including broadband radio links for backhaul networking of cellular base stations and FOD (Foreign Object Debris) detection for runways. Even though reflectarrays have mainly been fabricated on dielectric substrates, a metal-only reflectarray based on stacked metallic sheets or metallized plastic moldings is a promising low-cost technology for millimeter-wave frequency bands. The metal-only reflectarray has a simple structure, ease of manufacture, high power-handling capability, and suitable radiation characteristics. In addition, we will briefly investigate the electromagnetic scattering theory for a metal-only reflectarray and compare the developed codes with commercial software based on the conformal finite-difference time-domain (FDTD) method.

Yong Heui Cho received the B.S. degree in Electronics Engineering from Kyungpook National University, Daegu, Korea, in 1998, and the M.S. and Ph.D. degrees in Electrical Engineering from the Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea, in 2000 and 2002, respectively. From 2002 to 2003, he was a Senior Research Staff member with the Electronics and Telecommunications Research Institute (ETRI), Daejeon, Korea. In 2003, he joined the School of Information and Communication Engineering, Mokwon University, Daejeon, Korea, where he is currently an Associate Professor. His research interests include the dispersion characteristics of waveguides, electromagnetic wave scattering, and reflectarray design.

 

A Decade of Exploration in Network Algorithmics
Jim Xu
Georgia Institute of Technology
Friday, March 25, 2011, 11:15am, Gunness Student Center
Host: Lixin Gao (CSE)

 "Network algorithmics" are techniques and principles behind the software and hardware systems running on high-speed Internet routers and measurement and security devices. This research is important because network link speeds have increased exponentially over the past two decades to accommodate rapidly growing number of Internet users and applications, and sophisticated network algorithmics are needed to forward, measure, monitor, and secure traffic streams at such speeds. In this talk, I am going to offer some samples from my network algorithmics research in the past decade. 

Jun (Jim) Xu is an Associate Professor in the College of Computing at Georgia Institute of Technology. He received his Ph.D. in Computer and Information Science from The Ohio State University in 2000. His current research interests include data streaming algorithms for the measurement and monitoring of computer networks and hardware algorithms and data structures for high-speed routers. He received the NSF CAREER award in 2003, ACM Sigmetrics best student paper award in 2004, and IBM faculty awards in 2006 and 2008. He was named an ACM Distinguished Scientist in 2010.

 

Dependable Architectures for Large Scale Multicores
Omer Khan
Electrical Engineering and Computer Science at MIT
Friday, March 11, 2011, 11:15am-12:05pm, Gunness 64
Host: Sandip Kundu (CSE)

Semiconductor technology scaling has enabled the integration of an exponentially increasing number of transistors per die. However, due to shrinking device sizes, manufacturing imperfections, and the need for low-voltage operation for energy efficiency, device reliability and the occurrence of hardware errors pose a serious dependability challenge. Dependable (also known as fault-tolerant) architectures generally fall into two categories: mechanisms for error detection/isolation and mechanisms for correction/recovery. Hardware errors can occur at the time of manufacturing or in the field at runtime, making the process of detecting and recovering from errors a challenging task.

In this talk, I will discuss a number of novel approaches for tolerating permanent (or hard) errors in multicores. The Thread Relocation architecture exploits the natural inter-core redundancy of multicores to enable tolerance to hardware errors. The proposed mechanism exposes the computational demands of threads to a runtime software layer, which uses the hardware-profiled information to initiate thread relocation, matching the computational demands of threads to the capabilities of fully functional and degraded cores.

Future multicores also face a serious dependability challenge in the "uncore", where a complex memory subsystem connects the cores via on-chip networks. Cache coherence lies at the core of functionally correct operation of shared-memory multicores. I will discuss the current and future research directions for the Dependable Cache Coherence (DCC) architecture, which combines the traditional directory protocol with a novel execution-migration (EM) protocol to ensure dependability that is transparent to the programmer. The DCC architecture allows these independent and architecturally redundant coherence protocols to co-exist in the hardware and ensure dependable operation of cache coherence.

Omer Khan is a Research Scientist in Electrical Engineering and Computer Science at MIT. Omer holds a Ph.D. (2010) in Electrical and Computer Engineering from the University of Massachusetts Amherst and a bachelor's degree from Michigan State University (2000). His teaching and research interests include computer architecture, hardware/software co-design, and VLSI. Omer also has more than seven years of industry experience at Motorola/Freescale and Intel. He has published over 20 papers in architecture and systems conferences and journals. He is a member of the IEEE.

Thin-stream interactive applications over reliable transport protocols: Observations and challenges
Andreas Petlund
University of Oslo / Simula Research Laboratory
Wed., March 9, 2011, 3:35pm, Marston 132
Host: Mike Zink (CSE)

A large number of network services rely on IP and reliable transport protocols. For applications that consume their bandwidth share completely, loss is handled satisfactorily, even for latency-sensitive applications. When applications send small packets intermittently, on the other hand, they can experience extreme latencies before packets are delivered to the receiving application. Many of these thin-stream applications are also time-dependent, in which case even occasionally experienced delays have severe consequences for the perceived quality. It has been shown for TCP that these shortcomings are caused by the way it handles retransmissions when the thin-stream packet transmission characteristics do not properly trigger the appropriate mechanisms. Analysis of scenarios where thin streams compete with greedy streams for the resources of a bottleneck also shows that thin streams do not get their fair share of the resources.

To address these shortcomings, we have developed backwards-compatible, sender-side-only modifications that reduce the application-layer latency even for receivers with unmodified TCP implementations. We implemented the mechanisms as modifications to the Linux kernel TCP stack and tested them against 4 major operating systems. Two of the presented modifications have been included in the standard Linux kernel since version 2.6.34. The enhancements can individually be enabled per application and are then used when the kernel identifies a stream as thin. To evaluate performance, we conducted a wide range of tests over various real and emulated networks. The analysis of the performed tests shows that the probability of experiencing high latencies is greatly reduced for thin streams when applying the modifications. Our work shows that it is advisable to handle thin streams separately in order to reduce latency for interactive applications, but there are still many open questions regarding the means that should be applied to improve this situation.
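
For orientation, the sketch below shows how an interactive application might opt in to such per-stream mechanisms on Linux. The socket-option names and numeric values are taken from linux/tcp.h on kernels after 2.6.34 and should be treated as assumptions to verify against the target system; this is an illustration, not code from the talk.

import socket

# Constants from linux/tcp.h (not exported by the Python socket module on all
# versions); assumed values, verify on the target kernel:
TCP_THIN_LINEAR_TIMEOUTS = 16   # linear (not exponential) retransmission backoff
TCP_THIN_DUPACK = 17            # trigger fast retransmit after one duplicate ACK

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)          # disable Nagle delay
sock.setsockopt(socket.IPPROTO_TCP, TCP_THIN_LINEAR_TIMEOUTS, 1)    # per-socket opt-in
sock.setsockopt(socket.IPPROTO_TCP, TCP_THIN_DUPACK, 1)
# sock.connect(("game-server.example", 5555))  # hypothetical server; the application
# then sends its usual small, intermittent messages, and the kernel applies the
# modified retransmission behavior only when it identifies the stream as thin.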

Andreas Petlund is a Postdoc at the Department of Informatics, University of Oslo, and at Simula Research Laboratory. He received his M.Sc. in 2005 and Ph.D. in 2009, both at the University of Oslo. His main research interests include network protocol optimization for time-dependent thin streams, operating system optimizations, and hardware offloading.

 

A New Approach to Distributed Parallel Simulation
Seiyang Yang
Pusan University, South Korea
Monday, Feb. 7, 2011, 4:00 pm, Marston 132
Host: Maciej Ciesielski (CSE)

This talk describes an efficient solution to parallel event-driven simulation of digital designs described in a hardware description language (HDL). The simulation speedup offered by current distributed HDL simulation methods is seriously limited by the synchronization and communication overhead between the simulators. This talk introduces two radically different approaches to parallel simulation of gate-level designs, aimed at completely eliminating such communication and synchronization overhead.

The first approach is based on a new concept of temporal parallel simulation: in contrast to traditional, spatially distributed simulation, which partitions the design into multiple modules to be simulated concurrently, temporal parallel simulation partitions a single simulation run into multiple simulation slices in the temporal domain. With the slices independent of one another, an almost linear speedup is achievable with a large number of simulation nodes. Experimental results demonstrate that linear speedup is possible for large designs and long simulation runs.

The second approach is based on a different concept of spatial parallelism using accurate prediction of the input and output signals of individual modules, derived from a model at a higher abstraction level. Using the predicted rather than the actual signal values makes it possible to eliminate the need for communication and synchronization between the simulators. Each local simulation compares the actual input with the predicted input, and if the number of matches exceeds a predetermined threshold, the simulation is switched back to the prediction phase. The proposed method is applicable to massively parallel computing platforms and can work with any commercial event-driven HDL simulator.
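
The prediction/fallback logic can be illustrated conceptually as follows; this is a schematic sketch with an assumed matching rule and placeholder module logic, not the speaker's simulator.

def simulate_module(actual_inputs, predicted_inputs, match_threshold=3):
    """Drive one module from predicted inputs while they match the actual ones,
    falling back to actual (synchronized) values when the prediction fails."""
    outputs, matches, using_prediction = [], 0, True
    for t, actual in enumerate(actual_inputs):
        predicted = predicted_inputs[t]
        if using_prediction:
            if actual == predicted:
                stimulus = predicted           # no synchronization with other simulators
            else:
                using_prediction, matches = False, 0
                stimulus = actual              # mispredicted: use the actual value
        else:
            stimulus = actual
            matches = matches + 1 if actual == predicted else 0
            if matches >= match_threshold:     # predictions look reliable again
                using_prediction = True
        outputs.append(evaluate(stimulus))     # the module's own gate-level evaluation
    return outputs

def evaluate(x):                                # placeholder for the module logic
    return not x

print(simulate_module([0, 1, 1, 0, 1], [0, 1, 0, 0, 1]))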

Seiyang Yang is a professor in the Computer Engineering Department at Pusan National University, Pusan, Korea. He received his Ph.D. from the University of Massachusetts, Amherst, in 1990. From 1990 to 1991 he was a Senior Member of Technical Staff at MCNC (Microelectronics Center of North Carolina), Research Triangle Park, North Carolina. From 1991 to 1997 he participated in the development of the first Korean logic synthesis system, LODECAP (Logic Design CAPture). He is a founder of SEVITS Technology, Inc. and served as its CTO from 1998 to 2003. During that period, he invented an innovative FPGA debugging system called MagicDebugger. Since 2002 his research has focused on simulation-based verification and debugging. He invented a fast signal dumping method and implemented it with his graduate students as a CAD tool, which was recently commercialized by a Korean EDA startup and adopted by Samsung Electronics Corp. He has published a number of research papers and holds a number of US and Korean patents in the fields of logic verification and synthesis.