
2017

Spring 2017

Jan. 20

Advanced Database Techniques for Scientific Data Processing, Weijie Zhao, UC Merced

Abstract

Scientific applications generate an ever-increasing volume of multi-dimensional data, largely processed inside distributed array databases and frameworks. Similarity join is a fundamental operation across scientific workloads that requires complex processing over an unbounded number of pairs of multi-dimensional points. In this talk, we introduce a novel distributed similarity join operator for multi-dimensional arrays. Unlike immediate extensions of array join and relational similarity join, the proposed operator minimizes the overall data transfer and network congestion while providing load balancing, without completely repartitioning and replicating the input arrays. We formally define array similarity join and present the design, optimization strategies, and evaluation of the first array similarity join operator. When the data are rapidly updated, the join result can be treated as a view defined by the similarity join. We model the process as incremental view maintenance with batch updates and give a three-stage heuristic that finds effective update plans. Moreover, as a side-effect of view maintenance, the heuristic continuously repartitions the array and the view based on a window of past updates. We also design an analytical cost model for integrating materialized array views into queries. A thorough experimental evaluation confirms that the proposed techniques are able to incrementally maintain a real astronomical data product in a production pipeline.
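
To make the core operation concrete: a similarity (epsilon-distance) join reports all pairs of points within a distance threshold. The sketch below is a minimal single-machine illustration, contrasting a brute-force join with a grid-partitioned one; the distributed partitioning, load balancing, and view maintenance are the talk's actual contributions and are not modeled here, and the sample points are invented.

```python
from itertools import product

def dist2(p, q):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(p, q))

def similarity_join(A, B, eps):
    """Brute-force epsilon-join: all pairs (a, b) with ||a - b|| <= eps."""
    return [(a, b) for a in A for b in B if dist2(a, b) <= eps * eps]

def grid_similarity_join(A, B, eps):
    """Grid-based epsilon-join: bucket B into cells of side eps and compare
    each point of A only against B-points in its own and adjacent cells."""
    def cell(p):
        return tuple(int(x // eps) for x in p)
    buckets = {}
    for b in B:
        buckets.setdefault(cell(b), []).append(b)
    out = []
    for a in A:
        c = cell(a)
        # Any match within distance eps lies at most one cell away per axis.
        for off in product((-1, 0, 1), repeat=len(c)):
            key = tuple(ci + oi for ci, oi in zip(c, off))
            out.extend((a, b) for b in buckets.get(key, [])
                       if dist2(a, b) <= eps * eps)
    return out

A = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]
B = [(0.1, 0.0), (1.2, 1.1), (9.0, 9.0)]
print(sorted(similarity_join(A, B, 0.5)) == sorted(grid_similarity_join(A, B, 0.5)))  # True
```

The grid variant is the standard way to avoid comparing all pairs; a distributed operator must additionally decide how cells map to workers, which is where the talk's data-transfer and load-balancing optimizations come in.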

Biography

Weijie Zhao is a PhD student in the EECS graduate group at UC Merced, working with Prof. Florin Rusu. He received his BS from East China Normal University in Shanghai. His research interests include databases and scientific data management. Weijie is an avid computer programming contestant.

Jan. 27

Image Editing and Learning Filters for Low-level Vision, Yi-Hsuan Tsai and Sifei Liu, UC Merced

Abstract

In the first part of this talk, we present a semantic-aware image editing algorithm for automatic sky replacement. The key idea of our algorithm is to utilize visual semantics to guide the entire process, including sky segmentation, search, and replacement. First, we train a deep convolutional neural network for semantic scene parsing, which is used as a visual prior to segment sky regions in a coarse-to-fine manner. Second, to find proper skies for replacement, we propose a data-driven scheme based on the semantic layout of the input image. Finally, to re-compose the stylized sky with the original foreground naturally, an appearance transfer method is developed to match statistics locally and semantically. We show that the proposed algorithm can automatically generate a set of visually pleasing and realistic results. In the second part, we present work on learning image filters for low-level vision (e.g., edge-preserving filtering and denoising), for which a unified hybrid neural network is proposed. The network contains several spatially variant recurrent neural networks (RNNs), acting as equivalents of a group of distinct recursive filters for each pixel, and a deep convolutional neural network (CNN) that learns the weights of the RNNs. The proposed model needs neither a large number of convolutional channels nor big kernels to learn features for low-level vision filters. Experimental results show that many low-level vision tasks can be effectively learned and carried out in real time by the proposed algorithm.

Biography

Yi-Hsuan Tsai (https://sites.google.com/site/yihsuantsai/) received his B.S. in Electronics Engineering from National Chiao-Tung University, Hsinchu, Taiwan, and his M.S. in Electrical Engineering and Computer Science from the University of Michigan, Ann Arbor. He is currently working toward his Ph.D. at UC Merced, advised by Prof. Ming-Hsuan Yang, and is the recipient of the Graduate Dean's Dissertation Fellowship in 2016. He was invited to attend the doctoral consortium at the IEEE Conference on Computer Vision and Pattern Recognition in 2016. His research interests include computer vision, computational photography, and machine learning, with a focus on visual object recognition and image editing. He has also held research internships at Qualcomm Research, the Max Planck Institute, and Adobe Research.
Sifei Liu (http://www.sifeiliu.net/) is a Ph.D. candidate in Electrical Engineering and Computer Science at UC Merced, advised by Prof. Ming-Hsuan Yang. She completed her M.C.S. at the University of Science and Technology of China (USTC) under Stan Z. Li and Bin Li, and received her B.S. in control science and technology from North China Electric Power University. She received the Baidu Fellowship in 2013. In 2013 and 2014, she was an intern at the Baidu Deep Learning Institute, and in 2015 she was a visiting student at the Chinese University of Hong Kong. She was invited to attend the doctoral consortium at the IEEE Conference on Computer Vision and Pattern Recognition in 2016. Her research interests include computer vision, machine learning, and computational photography.

Feb. 3

It's All about Cache, Ming Zhao, Arizona State University

Abstract

This talk is about cache, and more specifically, solid-state storage based caches for large-scale computing systems such as cloud computing and big data systems. With the increasing workload data intensity and increasing level of consolidation in such systems, storage is becoming a serious bottleneck. Emerging solid-state storage devices such as flash memory and 3D XPoint have the potential to address this scalability issue by providing a new caching layer between main memory and hard drives in the storage hierarchy. However, solid-state storage has limited capacity and endurance, and needs to be managed carefully when used for caching. This talk will present several recent works from the ASU VISA Research Lab that address these limitations and make effective use of solid-state caching.
First, the talk will introduce CloudCache, an on-demand cache allocation solution for understanding the cache demands of workloads and allocating the shared cache capacity efficiently. It is able to reduce a workload's cache usage by 78% and the amount of writes sent to the cache device by 40%, compared to the traditional working-set-based approach. Second, the talk will present CacheDedup, an in-line cache deduplication solution that integrates caching and deduplication with duplication-aware cache replacement to improve the performance and endurance of solid-state caches. It can reduce a workload's I/O latency by 51% and the amount of writes sent to the cache device by 89%, compared to traditional cache management approaches. Finally, the talk will conclude with a brief overview of the systems research at the VISA lab.
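
The core idea behind cache deduplication can be illustrated with a toy content-addressed cache: a fingerprint of each block's content lets the cache store every unique block once, so duplicate writes cost no extra space and generate no extra writes to the device. The class name, plain-LRU eviction, and sample blocks below are invented for illustration; CacheDedup's actual duplication-aware replacement policies are more involved.

```python
import hashlib
from collections import OrderedDict

class DedupCache:
    """Toy deduplicating cache: logical addresses map to content
    fingerprints, and each unique content is stored only once."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.addr_to_fp = {}        # logical address -> fingerprint
        self.store = OrderedDict()  # fingerprint -> (data, refcount), LRU order

    def put(self, addr, data):
        fp = hashlib.sha256(data).hexdigest()
        # Drop this address's old mapping, if any.
        old = self.addr_to_fp.pop(addr, None)
        if old is not None:
            d, rc = self.store[old]
            if rc == 1:
                del self.store[old]
            else:
                self.store[old] = (d, rc - 1)
        if fp in self.store:
            # Duplicate content: bump the refcount, no new cache write.
            d, rc = self.store[fp]
            self.store[fp] = (d, rc + 1)
            self.store.move_to_end(fp)
        else:
            while len(self.store) >= self.capacity:
                evicted_fp, _ = self.store.popitem(last=False)
                self.addr_to_fp = {a: f for a, f in self.addr_to_fp.items()
                                   if f != evicted_fp}
            self.store[fp] = (data, 1)
        self.addr_to_fp[addr] = fp

    def get(self, addr):
        fp = self.addr_to_fp.get(addr)
        return self.store[fp][0] if fp is not None else None

cache = DedupCache(capacity_blocks=4)
cache.put(0, b"hello")
cache.put(1, b"hello")   # duplicate content: two addresses, one stored block
cache.put(2, b"world")
print(len(cache.store))  # 2 unique blocks cached
```

The write reduction numbers quoted in the abstract come from exactly this effect at scale: duplicate blocks never reach the flash device.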

Biography

Ming Zhao is an associate professor in the Arizona State University (ASU) School of Computing, Informatics, and Decision Systems Engineering (CIDSE), where he directs the research laboratory for Virtualized Infrastructures, Systems, and Applications (VISA, http://visa.lab.asu.edu). His research is in the areas of experimental computer systems, including distributed/cloud, big-data, and high-performance systems, as well as operating systems and storage in general. He is also interested in interdisciplinary studies that bridge computer systems research with other domains. His work has been funded by the National Science Foundation (NSF), the Department of Homeland Security, the Department of Defense, the Department of Energy, and industry, and his research outcomes have been adopted by several production systems in industry. Dr. Zhao has received the NSF Faculty Early Career Development (CAREER) award, the Air Force Summer Faculty Fellowship, the VMware Faculty Award, and the Best Paper Award of the IEEE International Conference on Autonomic Computing. He received his bachelor's and master's degrees from Tsinghua University, and his PhD from the University of Florida.

Feb. 10

Visual Understanding: Face Parsing and Video Object Segmentation, Sifei Liu and Yi-Hsuan Tsai, UC Merced

Abstract

In the first part of this talk, we present work on face parsing via a conditional random field with unary and pairwise classifiers. We develop a novel multi-objective learning method that optimizes a single unified deep convolutional network with two distinct non-structured loss functions: one encoding the unary label likelihoods and the other encoding the pairwise label dependencies. Moreover, we regularize the network by using a nonparametric prior as new input channels in addition to the RGB image, and show that significant performance improvements can be achieved with a much smaller network size. Experiments show state-of-the-art and accurate labeling results on challenging images for real-world applications. In the second part, we present work on video object segmentation, a challenging problem due to fast moving objects, deformed shapes, and cluttered backgrounds. To obtain accurate segmentation across time, we propose an efficient algorithm that considers video segmentation and optical flow estimation simultaneously. For video segmentation, we formulate a principled, multi-scale, spatio-temporal objective function that uses optical flow to propagate information between frames. For optical flow estimation, particularly at object boundaries, we compute the flow independently in the segmented regions and recompose the results. We call the process "object flow" and demonstrate the effectiveness of jointly optimizing optical flow and video segmentation using an iterative scheme.

Biography

Sifei Liu (http://www.sifeiliu.net/) is a Ph.D. candidate in Electrical Engineering and Computer Science at UC Merced, advised by Prof. Ming-Hsuan Yang. She completed her M.C.S. at the University of Science and Technology of China (USTC) under Stan Z. Li and Bin Li, and received her B.S. in control science and technology from North China Electric Power University. She received the Baidu Fellowship in 2013. In 2013 and 2014, she was an intern at the Baidu Deep Learning Institute, and in 2015 she was a visiting student at the Chinese University of Hong Kong. She was invited to attend the doctoral consortium at the IEEE Conference on Computer Vision and Pattern Recognition in 2016. Her research interests include computer vision, machine learning, and computational photography.
Yi-Hsuan Tsai (https://sites.google.com/site/yihsuantsai/) received his B.S. in Electronics Engineering from National Chiao-Tung University, Hsinchu, Taiwan, and his M.S. in Electrical Engineering and Computer Science from the University of Michigan, Ann Arbor. He is currently working toward his Ph.D. at UC Merced, advised by Prof. Ming-Hsuan Yang, and is the recipient of the Graduate Dean's Dissertation Fellowship in 2016. He was invited to attend the doctoral consortium at the IEEE Conference on Computer Vision and Pattern Recognition in 2016. His research interests include computer vision, computational photography, and machine learning, with a focus on visual object recognition and image editing. He has also held research internships at Qualcomm Research, the Max Planck Institute, and Adobe Research.

Feb. 17

Interactive Visual Computing for Knowledge Discovery in Science, Engineering, and Training, Jian Chen, University of Maryland, Baltimore County

Abstract

Imagine computer displays becoming a space to augment human thinking. Essential human activities such as seeing, gesturing, and exploring can couple with powerful computational solutions using natural interfaces and accurate visualizations. In this talk, I will present research efforts to quantify visualization techniques of all kinds. Our ongoing work includes research in: (1) perceptually accurate visualization: constructing a visualization language to study how to depict spatially complex fields in quantum-physics simulations and brain-imaging datasets; (2) using space to compensate for limited human memory: developing new computing and interactive capabilities for bat-flight motion analysis in a new metaphorical interface; and (3) extending exploratory metaphors to biological pathways to make possible the integrated analysis of multifaceted datasets. During the talk, I will point to a number of other projects being carried out by my team. I will close with some thoughts on automating the evaluation of visualizations, and venture that a science of visualization and metaphors now has the potential to be developed in full, and that its success will be crucial to understanding data-to-knowledge techniques in traditional desktop and immersive settings.

Biography

Jian Chen is an Assistant Professor in the Department of Computer Science and Electrical Engineering at the University of Maryland, Baltimore County (UMBC), where she leads the Interactive Visual Computing Lab (http://ivcl.umbc.edu) and UMBC's Immersive Hybrid Reality Lab (http://tinyurl.com/ztnvdmf). She maintains general research interests in the design and evaluation of visualizations (encoding of spatially complex brain imaging, integrating spatial and non-spatial data, perceptually accurate visualization, and event analysis) and interaction (exploring large biological pathways, immersive modeling, embodiment, and gesture input). She has garnered best-paper awards at international conferences, and her work is funded by NSF, NIST, and DoD. She is also a UMBC innovation fellow and a co-chair of the first international workshop on the emerging field of Immersive Analytics. Chen did her post-doctoral research at Brown University jointly with the Departments of Computer Science (with Dr. David H. Laidlaw) and Ecology and Evolutionary Biology. She received her Ph.D. in Computer Science from Virginia Tech with Dr. Doug A. Bowman. To learn more about Jian Chen and her work, please visit http://www.csee.umbc.edu/~jichen.

Feb. 24

Situated Intelligent Interactive Systems, Zhou Yu, Carnegie Mellon University

Abstract

Communication is an intricate dance, an ensemble of coordinated individual actions. Imagine a future where machines interact with us like humans: waking us up in the morning, navigating us to work, or discussing our daily schedules in a coordinated and natural manner. Current interactive systems being developed by Apple, Google, Microsoft, and Amazon attempt to reach this goal by combining a large set of single-task systems. But products like Siri, Google Now, Cortana, and Echo still follow pre-specified agendas; they cannot transition between tasks smoothly or track and adapt to different users naturally. My research draws on recent developments in speech and natural language processing, human-computer interaction, and machine learning to work towards the goal of developing situated intelligent interactive systems. These systems can coordinate with users to achieve effective and natural interactions. I have successfully applied the proposed concepts to various tasks, such as social conversation, job interview training, and movie promotion. My team's proposal on engaging social conversation systems was selected to receive $100,000 from Amazon Inc. to compete in the Amazon Alexa Prize Challenge.

Biography

Zhou Yu is a graduating PhD student at the Language Technologies Institute in the School of Computer Science, Carnegie Mellon University, working with Prof. Alan W Black and Prof. Alexander I. Rudnicky. She interned with Prof. David Suendermann-Oeft at the ETS San Francisco office on cloud-based multimodal dialog systems in the summers of 2015 and 2016, and with Dan Bohus and Eric Horvitz at Microsoft Research on human-robot interaction in fall 2014. Prior to CMU, she received a B.S. in Computer Science and a B.A. in Linguistics from Zhejiang University in 2011, where she worked with Prof. Xiaofei He and Prof. Deng Cai on machine learning and computer vision, and with Prof. Yunhua Qu on machine translation.

March 3

Moving Towards Customizable Autonomous Driving, Chandrayee Basu, UC Merced

Abstract

In this talk, I will first present the results of my first research project on autonomous driving as a PhD student at UC Merced. This work was conducted in collaboration with Berkeley DeepDrive (http://bdd.berkeley.edu/project/implicit-communication-through-motion). With progress in enabling autonomous cars to drive safely on the road, it is time to start asking how they should be driving. From the days of ALVINN to the latest autonomous driving technologies like NVIDIA's Drive PX, researchers have used Learning from Demonstration to teach autonomous cars how to drive. Therefore, when it comes to customization of autonomous driving, a common answer is that the car should adopt the user's style. In this project, we questioned this assumption and conducted user research in a driving simulator to test our hypothesis. We found that users tend to prefer a significantly more defensive driving style than their own. Interestingly, they prefer the style they think is their own, even though their actual driving style tends to be more aggressive. The results show that conventional Learning from Demonstration algorithms will be inadequate for personalizing autonomous driving. In the second part of the talk, I will discuss the implications of this result in greater detail and present some of the potential algorithms that we can use to augment user demonstrations.

Biography

Chandrayee Basu (http://chandrayee-basu.squarespace.com) is a 2nd-year Ph.D. student at UC Merced, advised by Prof. Mukesh Singhal (UC Merced) and Prof. Anca Dragan (UC Berkeley). She is applying human-robot interaction algorithms to integrate human interaction into the motion planning of autonomous cars. Chandrayee has multi-disciplinary research experience in design, applied machine learning, smart environments, and human-robot interaction, acquired as a graduate student at UC Berkeley and Carnegie Mellon University.

March 10

Inventing in the Research Lab vs Startups, David Merrill, Lemnos Labs Inc.
This talk is part of the EECS | CITRIS Frontiers in Technology Series - Special Room: COB2-140

Abstract

In this talk I will compare and contrast research vs. startup innovation, based on my experiences at Stanford, the MIT Media Lab, and Bay Area startups. I'll discuss how the desired outcomes of each context encourage different kinds of risk and exploration, takeaways from my research experiences, and how we structure the early ideation process at Lemnos Labs, where I am an Entrepreneur in Residence.

Biography

David Merrill is a technology executive and hardware startup founder with a background in computer science and human-computer interaction. His tactile learning system startup Sifteo, based on his Ph.D. work at MIT, was acquired by drone-maker 3D Robotics in 2014 to become the kernel of a new consumer product group. At 3D Robotics he took various roles on the team that launched Solo: the Smart Drone in 2015, and then led R&D and IP. He is an alumnus of MIT and of Stanford's Computer Science and Symbolic Systems programs, a TED speaker, a human-computer interaction expert, and a drone builder. His work has been featured by the Discovery Channel, Popular Science, Wired, and the New York Times. Merrill is currently Entrepreneur in Residence at Lemnos Labs, an early-stage VC firm in San Francisco, where he is working on his next project.

March 17

Data-Based Full-Body Motion Coordination and Planning, Alain Juarez-Perez, UC Merced

Abstract

In this talk I will present new approaches for achieving full-body motion coordination for humanoid virtual characters. I will first present a parametric data-based mobility controller with known coverage and validity characteristics, achieving flexible real-time deformations for locomotion control. I will then present a method for switching between different types of locomotion in order to navigate cluttered environments. The proposed method incorporates the locomotion capabilities of the character in the path planning stage, producing paths that address the trade-off between path length and locomotion behavior choice for handling narrow passages. In the last part, I will introduce a new approach for the coordination of locomotion with manipulation. The approach is based on a coordination model built from motion capture data in order to represent proximity relationships between the action and the environment. The result is a real-time controller that can autonomously produce environment-dependent full-body motion strategies. The obtained coordination model is successfully applied on top of generic walking controllers, achieving controllable characters which are able to perform complete full-body interactions with the environment.

Biography

Alain Juarez-Perez is a Ph.D. candidate in the Electrical Engineering and Computer Science graduate group of the University of California, Merced. His work is being developed at the Computer Graphics Lab, advised by Prof. Marcelo Kallmann, and has been supported by a UC MEXUS Doctoral Fellowship. He received his B.S. in Computer Science in 2012 from the University of Guanajuato, and in 2014 he was a visiting research assistant at the USC Institute for Creative Technologies. His research interests include Computer Animation, Data-Driven Algorithms, Motion Capture, Computational Geometry, Machine Learning, Computer Graphics, and Motion Planning.

March 24

Securing Internet of Things, Chen Qian, UC Santa Cruz

Abstract

In this talk, I will introduce my recent research projects on Internet of Things (IoT) security. First, I will introduce a physical layer authentication method for RFID tags. Second, I will talk about a fast and reliable protocol for authentication and key agreement among multiple IoT devices based on wireless signal information. Third, I will introduce an IoT data communication framework that guarantees data authenticity and integrity.

Biography

Chen Qian is an Assistant Professor in the Department of Computer Engineering at the University of California, Santa Cruz. He was on the faculty of Computer Science at the University of Kentucky from 2013 to 2016. He received his PhD from The University of Texas at Austin, where he worked with Simon Lam. His research interests include computer networking and distributed systems, the Internet of Things, network security, and cloud computing. He has published more than 60 papers, most of which appeared in top journals and conferences including ToN, TPDS, ICNP, INFOCOM, ICDCS, SIGMETRICS, CoNEXT, CCS, NDSS, and UbiComp.

April 7

Stochastic Distribution Control and Its Applications, Hong Wang, PNNL

Abstract

This seminar presents a brief, selective survey of advances in stochastic distribution control, where the purpose of the controller design is to control the shape of the output probability density functions (pdfs) of non-Gaussian and general stochastic systems. This research was motivated by the requirement of distribution shape control in a number of practical systems. In recent years much research has been performed internationally, and journal special issues and invited sessions at major conferences have appeared since 2001. This seminar is expected to provide some up-to-date information on this new area.
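
The control objective described above can be illustrated with a deliberately simplified, hypothetical setup (not any specific method from the survey): choose the control input so that the system's output pdf tracks a target pdf, with closeness measured by an integrated squared error between the two density curves.

```python
import math

def gaussian_pdf(x, mean, std):
    """Density of a normal distribution at x."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

# Hypothetical system: the control input u shifts the mean of the output
# distribution, while the spread is fixed by the process noise.
def output_pdf(x, u):
    return gaussian_pdf(x, mean=u, std=0.5)

def target_pdf(x):
    return gaussian_pdf(x, mean=2.0, std=0.5)

# Stochastic distribution control minimizes a distance between the output
# pdf and the target pdf; here, squared error summed over a grid, with a
# simple search over candidate inputs standing in for controller design.
grid = [i * 0.05 for i in range(-40, 120)]

def cost(u):
    return sum((output_pdf(x, u) - target_pdf(x)) ** 2 for x in grid)

best_u = min((i * 0.1 for i in range(0, 41)), key=cost)
print(round(best_u, 1))  # 2.0: the input whose output pdf matches the target
```

Real stochastic distribution control replaces this grid search with feedback laws derived from a model of how the input reshapes the output pdf, but the objective being minimized has the same flavor.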

Biography

Dr. Hong Wang joined PNNL in February 2016 as a Laboratory Fellow. He is based in the Controls team within the Electricity Infrastructure and Buildings Division of the Energy and Environment Directorate. Prior to joining PNNL, he was a full (chair) professor in process control at the University of Manchester in the U.K. Dr. Wang's research interests are in advanced modelling and control for complex industrial processes, and fault diagnosis and tolerant control. He originated the research on stochastic distribution control, where the main purpose of control input design is to make the shape of the output probability density functions follow a targeted function. This area alone has found a wide spectrum of potential applications in modelling, data mining, signal processing, optimization, and distributed control systems design. Dr. Wang is the lead author of five books and has published over 300 papers in international journals and conferences. He is a member of three International Federation of Automatic Control Technical Committees and an associate editor for IEEE Transactions on Control Systems Technology, IEEE Transactions on Automation Science and Engineering, and seven other international journals. He has been an associate editor of IEEE Transactions on Automatic Control, and has served as an IPC member and conference chairman for many international conferences. Dr. Wang has also received several best paper awards from international conferences, including the best paper award at Int. Conf. Control 2006, the Jasbar Memorial Prize for his outstanding contribution to science and technology development for the paper industry in 2006, the best theory paper award at the World Congress on Intelligent Control and Automation in 2014, and selection as one of the five finalists for the best application paper prize at the 2014 IFAC World Congress. Dr. Wang holds a Ph.D. degree from Huazhong University of Science and Technology (HUST) in P. R. China.

April 14

No seminar.

April 21

Bridging the Gap in Grasp Quality Evaluation, Shuo Liu, UC Merced

Abstract

Robot grasp planning has been extensively studied over the last decades, and it often consists of two different stages: deciding where to grasp an object, and measuring the quality of a tentative grasp. Because these two processes are computationally demanding, form closure grasps are more widely used in practice than force closure grasps, even though the latter are in many cases preferable. In this talk, we introduce our framework to improve grasp quality evaluation. We accelerate the computation of the grasp wrench space, used to measure grasp quality, by exploiting geometric insights in the computation of the convex hull. In particular, we identify a cutoff sequence to terminate the convex hull calculation with guaranteed convergence to the quality measure. Furthermore, we study how noise at each joint of the manipulator affects grasp quality. Different arm configurations generate different noise distributions at the end-effector, which have a large impact on the robustness of grasping. In the last part of the talk, I will introduce a grasp planner that takes into account the local geometry of the object to be grasped. In particular, for concave objects we exploit the fact that grasping at a concave region can make the grasp more robust. These insights are studied in theory and validated on an experimental platform.
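
The quality measure built on the grasp wrench space is commonly the Ferrari-Canny epsilon metric: the radius of the largest ball centered at the wrench-space origin that fits inside the convex hull of the contact wrenches. The sketch below illustrates the metric in 2D; real wrench spaces are six-dimensional, the example wrench sets are invented, and the talk's cutoff sequence for early hull termination is not shown.

```python
import math

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def epsilon_quality(wrenches):
    """Radius of the largest origin-centered ball inside the convex hull of
    the wrenches (Ferrari-Canny style). Returns 0 if the origin is not in
    the hull's interior, i.e. the grasp is not force closure."""
    hull = convex_hull(wrenches)
    best = float("inf")
    n = len(hull)
    for i in range(n):
        (x1, y1), (x2, y2) = hull[i], hull[(i + 1) % n]
        ex, ey = x2 - x1, y2 - y1
        # Signed distance from the origin to this CCW edge's supporting
        # line; non-positive means the origin is not strictly inside.
        d = (ey * x1 - ex * y1) / math.hypot(ex, ey)
        if d <= 0:
            return 0.0
        best = min(best, d)
    return best

# Four symmetric unit wrenches: the hull is a diamond around the origin,
# so the largest inscribed ball has radius 1/sqrt(2).
print(round(epsilon_quality([(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]), 4))  # 0.7071
# Wrenches that fail to surround the origin give zero quality.
print(epsilon_quality([(1.0, 0.0), (1.0, 1.0), (2.0, 0.0)]))  # 0.0
```

Since the metric depends only on the hull facet nearest the origin, a full hull is often unnecessary, which is the intuition behind terminating the hull computation early.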

Biography

Shuo Liu received his B.Sc. degree in computer engineering from the University of Minnesota in 2012 and his B.Sc. degree in mathematics from Beijing Jiaotong University in 2012. In his junior year of college, he participated in RoboCup 2011 and won the championship in the Middle Size League. He also participated in the IROS 2016 Grasping and Manipulation Challenge and won 2nd place in the automation track. Since August 2012 he has been pursuing a Ph.D. degree in electrical engineering and computer science at the University of California, Merced, working with Dr. Stefano Carpin. His interests include manipulation, grasping, and computational geometry.

April 28

Multicopter dynamics and control: surviving the complete loss of multiple actuators and rapidly generating trajectories, Mark Mueller, Mechanical Engineering Dept., UC Berkeley

Abstract

Flying robots, such as multicopters, are increasingly becoming part of our everyday lives, with current and future applications including personal transportation, delivery services, entertainment, and aerial sensing. These systems are expected to be safe and to have a high degree of autonomy. This talk will discuss the dynamics and control of multicopters, including some research results on trajectory generation for multicopters and fail-safe algorithms. Finally, we will present the application of a failsafe algorithm to a fleet of drones performing as part of a live theatre performance on New York's Broadway.

Biography

Mark W. Mueller joined the mechanical engineering department at UC Berkeley in September 2016. He completed his PhD studies, advised by Prof. Raffaello D'Andrea, at the Institute for Dynamic Systems and Control at ETH Zurich at the end of 2015. He received a bachelor's degree from the University of Pretoria, and a master's degree from ETH Zurich in 2011, both in Mechanical Engineering. http://www.me.berkeley.edu/people/faculty/mark-mueller

May 5

Robots for the Real World, James Gosling, Liquid Robotics
This talk is part of the EECS | CITRIS Frontiers in Technology Series - Special Room: COB2-140

Fall 2017

Aug. 25

Introduction to EECS 290, Mukesh Singhal, UC Merced

Sept. 15

Scalable Asynchronous Gradient Descent Optimization for Big Models, Martin Torres, UC Merced

Abstract

The number of features in models has been steadily growing, and it is now common to see models with millions or even billions of features. However, existing data analytics systems approach predictive model training exclusively from a data-parallel perspective, partitioning the data across multiple workers and executing computations concurrently over different partitions. Although various synchronization policies are used to emphasize speedup or convergence, little attention is paid to model management and its importance for effective training. In this work, we present a general framework for parallelizing stochastic optimization algorithms over massive models that cannot fit in memory, by vertically partitioning the model offline and asynchronously updating the resulting partitions online. We identify suboptimal behavior in the naive implementation and minimize concurrent requests to the common model by introducing a preemptive push-based sharing mechanism. Our experimental results show improved convergence over HOGWILD! for both real and synthetic datasets, and ours is the only solution that scales to massive models.
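
Vertical model partitioning can be sketched in miniature: split the model's coordinates into blocks and let each block apply its own slice of the gradient. This toy, serial simulation (the data, partition layout, and learning rate are invented for illustration) omits the asynchrony and the push-based sharing mechanism that are the work's actual contributions.

```python
import random

random.seed(0)
# Toy data: y = x . w_true for a 4-feature linear model.
w_true = [1.0, -2.0, 0.5, 3.0]
data = []
for _ in range(200):
    x = [random.uniform(-1, 1) for _ in range(4)]
    y = sum(xi * wi for xi, wi in zip(x, w_true))
    data.append((x, y))

# Vertical partitioning: each "worker" owns a block of model coordinates
# and applies only its block of the gradient.
partitions = [[0, 1], [2, 3]]
w = [0.0] * 4
lr = 0.1

def loss():
    """Mean squared error of the current model over the whole dataset."""
    return sum((sum(xi * wi for xi, wi in zip(x, w)) - y) ** 2
               for x, y in data) / len(data)

before = loss()
for _ in range(50):
    x, y = random.choice(data)
    err = sum(xi * wi for xi, wi in zip(x, w)) - y
    for block in partitions:          # each block is updated independently
        for j in block:
            w[j] -= lr * 2 * err * x[j]
after = loss()
print(after < before)
```

In the actual system the blocks live on different workers and are updated asynchronously, so updates may read slightly stale values of the other blocks; the point of the push-based sharing mechanism is to keep that staleness, and the contention it causes, under control.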

Biography

Martin Torres is a PhD student in the EECS graduate group at UC Merced, working with Prof. Florin Rusu. He received his BS in Computer Science and Cognitive Science from California State University, Stanislaus. His research includes large-scale data analytics, focusing on optimizing various machine learning algorithms across different architectures and systems.

Sept. 22

Urban Impervious Surface Extraction Using High-Resolution Remote Sensing Images, Dr. Zhenfeng Shao, State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University

Abstract

Impervious surfaces are anthropogenic features through which water cannot infiltrate into the soil. Impervious surface coverage is a significant indicator of the degree of urbanization and of the quality of the urban eco-environment. The rapid development of urbanization brings massive expansion of impervious surfaces, influencing the regional eco-environment and restricting regional sustainable development.

This talk will focus on methods for urban impervious surface extraction from high-resolution remote sensing images. An object-oriented framework is proposed. Buffalo in the United States and several cities in China are selected as case study areas. Various high-resolution images, including IKONOS, GeoEye, GF-1, GF-2, ZY-3, and other mapping satellites, are used. Challenges and future work, such as dynamic monitoring of impervious surfaces, will be discussed.

Biography

Zhenfeng Shao is a full Professor at Wuhan University, China. He received his Ph.D. in photogrammetry and remote sensing from Wuhan University in 2004 and works with the State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing (LIESMARS). He has published more than 40 peer-reviewed articles in international journals. His research interests include high-resolution image processing and remote sensing applications.

Dr. Shao was a recipient of the Talbert Abrams Award for the Best Paper in Image Matching from the American Society for Photogrammetry and Remote Sensing in 2014, and the New Century Excellent Talents in University from the Ministry of Education of China in 2012.

Oct. 06

Learning Binary Hash Functions: Optimisation- and Ensemble-based Approaches, Ramin Raziperchikolaei, UC Merced

Abstract

An attractive approach for fast search in image databases is binary hashing, where a hash function maps each high-dimensional, real-valued image onto a low-dimensional binary vector. The search for similar images is then done in the binary space, which is much faster because Hamming distances can be computed with hardware operations. However, binary hashing introduces error: images that were similar in the original real space may no longer be similar in the binary space. The main goal of binary hashing is to reduce this error as much as possible by learning hash functions that map dis/similar images onto dis/similar binary codes. In this talk, I will describe our work on finding better ways to learn hash functions.

In the first part of my talk, I will focus on optimisation-based approaches, in which a complicated objective function is defined over the parameters of the hash function. Optimising this nonconvex and nonsmooth objective is difficult because the output of the hash function is binary. Previous hashing papers ignore the binary constraints and use ad-hoc methods to solve the problem. In our work, we use the "method of auxiliary coordinates (MAC)" to optimise the objective function correctly, preserving the binary constraints and learning the binary codes and the hash functions jointly. This better optimisation leads to better hash functions, which perform more accurately in nearest-neighbor search. The main difficulty of the optimisation-based approach is that all the single-bit hash functions are coupled inside the objective function, which makes the optimisation slow.

In the second part of my presentation, I will talk about our proposed ensemble-based approach, which overcomes this difficulty. The idea is to learn the single-bit hash functions independently and then combine them into the final hash function. We use ensemble-based techniques to make sure that the hash functions are different. This approach gives several advantages: simpler optimisation problems, massive parallelization, and better performance in image retrieval. Finally, we show that the diversity-based approaches can be made even simpler by guessing the single-bit binary codes of the images.
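The speed advantage mentioned in the abstract comes from the fact that the Hamming distance between two binary codes is just a bitwise XOR followed by a population count, each a single hardware instruction. A tiny sketch (the 8-bit codes below are made-up values for illustration, not output of a learned hash function):

```python
def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary codes packed into integers:
    XOR leaves a 1 in each differing bit position, then count the 1s
    (a single POPCNT instruction on modern CPUs)."""
    return bin(a ^ b).count("1")

# Hypothetical 8-bit codes for a query image and two database images.
code_query = 0b10110100
code_near  = 0b10110110   # a similar image: differs in 1 bit
code_far   = 0b01001011   # a dissimilar image: differs in all 8 bits

print(hamming(code_query, code_near))  # prints 1
print(hamming(code_query, code_far))   # prints 8
```

Comparing a query against millions of such codes is therefore far cheaper than computing Euclidean distances between the original high-dimensional real vectors.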

Biography

Ramin Raziperchikolaei is a PhD candidate in the EECS Department of UC Merced. He received his BS in Computer Engineering in 2010 from Iran University of Science and Technology and his MS in Artificial Intelligence in 2012 from Sharif University of Technology, Iran. Since 2013, he has been working towards his PhD degree in machine learning. His research has focused on learning binary hash functions for fast image retrieval.

Oct. 13

Computational Social Science, Professor Alex Petersen, UC Merced

Abstract

The timely combination of accessible computational tools and data availability has led to advances across a wide range of scientific domains in the digital era. In this way, computational tools and methods represent a scientific ‘commons’ that brings together researchers from different disciplines, facilitating interdisciplinary endeavors and cross-disciplinary career paths. As a result, several new research communities have sprouted (e.g., Quantitative Social Science, Computational Social Science, Data Science, Digital Humanities), all of which occupy and leverage this commons. In this talk I will discuss the ‘data science’ pipeline, which includes identifying data sources; accessing or “scraping” raw data; cleaning, organizing, merging, and identifying potential pitfalls in the data; exploring and visualizing the underlying statistics; and finally modeling the data in the context of relevant research questions. I will provide examples of computationally driven social science from my own research on the Science of Science & Innovation, as well as an example of the data science pipeline using the Zillow API to produce longitudinal cross-sectional data on housing prices in several cities local to UC Merced.

Biography

Dr. Petersen is an assistant professor in the Management of Complex Systems unit at UC Merced. His research combines perspectives and methods from statistical physics, network science, computational social science, and econometrics in order to model science and innovation processes that occur across multiple scales: from individual publications and careers to national innovation systems.

Oct. 20

Optimizing Memory Efficiency for Deep Neural Networks on GPUs, Dr. Chao Li, Qualcomm Research

Abstract

Deep neural networks such as deep convolutional networks have achieved state-of-the-art results in various computer vision tasks. Leveraging large training data sets, deep convolutional neural networks (CNNs) have evolved into deep multi-layer computational structures that achieve high recognition accuracy. Due to their substantial compute and memory operations, however, they require significant execution time. The massive parallel computing capability of GPUs makes them one of the ideal platforms for accelerating CNNs, and a number of GPU-based CNN libraries have been developed. While existing work mainly focuses on the computational efficiency of CNNs, their memory efficiency has been largely overlooked. Yet CNNs have intricate data structures, and their memory behavior can have a significant impact on performance. In this talk, I will present our study on optimizing the memory efficiency of CNNs on GPUs. Specifically, we study the memory efficiency of various CNN layers and reveal the performance implications of both data layouts and memory access patterns. Experiments show the universal effect of our proposed optimizations on both single layers and complete networks, with speedups of up to 27.9x for a single layer and up to 5.6x for whole networks.
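The abstract does not spell out what a "data layout" choice means for a CNN tensor, but a small NumPy sketch can illustrate it: converting NCHW to NHWC changes which dimension is contiguous in memory, and therefore which access pattern is coalesced on a GPU. The dimensions and sizes below are illustrative, not taken from the talk.

```python
import numpy as np

# A small batch of feature maps: N images, C channels, an H x W spatial grid.
N, C, H, W = 2, 3, 4, 4
nchw = np.arange(N * C * H * W, dtype=np.float32).reshape(N, C, H, W)

# NCHW -> NHWC: after the copy, the C channel values of one pixel are
# adjacent in memory, so a kernel reading all channels of a pixel makes
# one contiguous (coalesced) access instead of C strided ones.
nhwc = np.ascontiguousarray(nchw.transpose(0, 2, 3, 1))

# Same values, different memory order: compare one pixel's channel vector.
assert np.array_equal(nhwc[0, 1, 2], nchw[0, :, 1, 2])
print(nchw.strides, nhwc.strides)   # the stride of the C axis changes
```

Which layout is faster depends on the layer's access pattern, which is why a per-layer study like the one described can yield large speedups without changing the computation itself.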

Biography

Chao Li is currently a Senior System Engineer (Researcher) on the Qualcomm GPU Research team. His work lies in computer architecture and programming, especially exploring new performance features for next-generation GPU systems. He obtained his PhD in Computer Engineering from North Carolina State University in 2016. His work has been published in top computer systems conferences such as SC, ICS, CGO, PPoPP, and ISPASS. He was an ACM/IEEE Supercomputing Conference Best Student Paper Finalist in 2016.

Oct. 27

Say Hello to Waymo, Dr. Ioan Sucan, Waymo

Abstract

Driving is an integral part of our lives. We do it for fun, but more often it is a necessity. Unfortunately, it is not always safe, and it almost always takes more time than we'd like. Worldwide, 1.2 million people die annually on our roadways. In the US alone, 35,000 people are killed each year, the equivalent of a 737 falling out of the sky every working day of the year. The vast majority of these accidents involve human error, which makes self-driving technology an enticing promise for greatly improving safety on the road. This talk will provide an overview of Waymo's self-driving technology, with a focus on safety considerations.

Biography

Ioan A. Șucan is currently a Research Software Engineer at Waymo (formerly part of X / Google[x]), working on motion planning for self-driving cars. Before joining Waymo, Dr. Șucan was a Research Scientist at Willow Garage, where he worked on a number of open-source software projects. Dr. Șucan's best-known contributions are MoveIt!, a motion planning and manipulation framework, and the Open Motion Planning Library (OMPL), a software library of sampling-based motion planning algorithms. Dr. Șucan received the Ph.D. and M.S. degrees in computer science from Rice University, Houston TX, in 2011 and 2008, respectively, under the supervision of Prof. Lydia Kavraki. He received the B.S. degree in electrical engineering and computer science from Jacobs University, Bremen, Germany, in 2006.

Nov. 3

Towards Accelerator-Rich Architectures and Systems, Dr. Zhenman Fang, Xilinx

Abstract

With Intel's $16.7B acquisition of Altera and the deployment of FPGAs in major cloud service providers including Microsoft and Amazon, we are entering a new era of customized computing. In future architectures and systems, it is anticipated that there will be a sea of heterogeneous accelerators customized for important application domains, such as machine learning and personalized healthcare, to provide better performance and energy-efficiency. Many research problems are still open, such as how to efficiently integrate accelerators into future chips and commodity datacenters, and how to program such accelerator-rich architectures and systems.

In this talk, I will first give a quick overview of my research on accelerator-rich architectures and systems, which spans application drivers to the underlying computer architectures. Then I will present our recent work on CPU-accelerator co-design, where we provide efficient and unified address-translation support between CPU cores and accelerators [HPCA 2017 Best Paper Nominee]. It shows that a simple two-level TLB design for accelerators, combined with using the host core's MMU for accelerator page walks, can be very efficient: on average, it achieves a 7.6x speedup over a naïve IOMMU, with only a 6.4% performance gap to ideal address translation. Third, I will present the concept of accelerators-as-a-service in cloud deployments and introduce our open-source Blaze prototype system [ACM SOCC 2016]. Blaze provides programming and runtime support that enables easy and efficient integration of FPGA accelerators into the state-of-the-art big data framework Apache Spark. By deploying a PCIe-based FPGA board in each CPU server using Blaze, cluster size can be reduced several-fold while maintaining the same system throughput. Finally, I will talk about future research that will enhance architecture, programming, compiler, runtime, and security support for accelerator-rich architectures and systems.

Biography

Dr. Zhenman Fang was a postdoc in the UCLA Computer Science Department starting in July 2014, and recently moved to Xilinx in San Jose in mid-September. During his postdoc, Zhenman worked with Prof. Jason Cong and Prof. Glenn Reinman, and was also a member of two multi-university centers: the Center for Domain-Specific Computing (CDSC) and the Center for Future Architectures Research (C-FAR). Zhenman received his PhD in June 2014 from Fudan University, China, and spent the last 15 months of his PhD program visiting the University of Minnesota at Twin Cities.

Zhenman's research lies at the intersection of heterogeneous and energy-efficient computer architectures, big data workloads and systems, and system-level design automation. He has published more than 10 papers in top venues spanning computer architecture (HPCA, TACO, ICS), design automation (DAC, ICCAD, FCCM, IEEE Design & Test), and cloud computing (ACM SOCC). He also actively serves on the organizing and program committees of top conferences, including HPCA 2017, ICCD 2017, IISWC 2017, DATE 2018, and ICS 2018. Finally, he has received several awards, including a Best Paper Nominee at HPCA 2017, a Best Paper Award at MEMSYS 2017, a postdoc fellowship from UCLA, and a Best Demo Award at the C-FAR center annual review. More details can be found on his personal website: https://sites.google.com/site/fangzhenman/.

Dec. 1

Value Alignment in Artificial Intelligence, Dylan Hadfield-Menell, UC Berkeley

Abstract

I will give an overview of some of the recent work we have been pursuing in formalizing, understanding, and solving 'the value alignment problem.' Loosely speaking, this is the problem of ensuring that an AI system's behavior aligns with its designer's or users' intended objective. It is closely related to the well-studied principal-agent problem from economics, where a firm needs to align an employee's incentives with the firm's ultimate goal. I will present Cooperative Inverse Reinforcement Learning, our initial attempt to mathematically formalize the value alignment problem, and discuss the implications of our framework for human-robot interaction and robust AI design.

Biography

I'm a fifth year Ph.D. student at UC Berkeley, advised by Anca Dragan, Pieter Abbeel, and Stuart Russell. My research focuses on the value alignment problem in artificial intelligence. My goal is to design algorithms that learn about and pursue the intended goal of their users, designers, and society in general. My recent work has focused on algorithms for human-robot interaction with unknown preferences and reliability engineering for learning systems.

I'm also interested in work that bridges the gap between AI theory and practical robotics, and in the problem of integrated task and motion planning. Before coming to Berkeley, I did a Master of Engineering with Leslie Kaelbling and Tomás Lozano-Pérez at MIT. When I'm not working on research, I'm usually wrapped up in a sci-fi or fantasy novel, playing ultimate frisbee, or skiing.

Dec. 8

AuCloud: the Cloud for the Transportation Industry, Dr. Carlos Garcia-Alvarado, Autonomic, Inc.