Welcome to the Electrical Engineering and Computer Science Technical Seminar Series.
The series is intended to provide breadth of exposure to a variety of state-of-the-art research projects in electrical engineering and computer science to our EECS graduate students, as well as to promote collaboration and knowledge exchange with distinguished researchers in the EECS field and related disciplines.
Fall 2019
All seminars take place on Friday at 11:45 a.m. in COB1 (CLSSRM) 263 unless otherwise posted
Aug. 30
Fabio Porto, LNCC, Petropolis, Rio de Janeiro, Brazil
Host: Dr. Florin Rusu
Abstract
TBD
Biography
TBD
Sept. 6
Longfei Shangguan, Microsoft Cloud and AI
Host: Dr. Wan Du
Abstract
TBD
Biography
TBD
Sept. 13
Mostafa Mirshekari, Carnegie Mellon University
Host: Dr. Shijia Pan
Abstract
TBD
Biography
TBD
Sept. 20
Yu Zhang, University of California, Santa Cruz
Host: Dr. Wan Du
Abstract
TBD
Biography
TBD
Sept. 27
TBD
Host:
Abstract
TBD
Biography
TBD
Oct. 4
TBD
Host:
Abstract
TBD
Biography
TBD
Oct. 11
TBD
Host:
Abstract
TBD
Biography
TBD
Oct. 18
TBD
Host: Dr. YangQuan Chen
Abstract
TBD
Biography
TBD
Oct. 25
TBD
Host:
Abstract
TBD
Biography
TBD
Nov. 1
TBD
Host:
Abstract
TBD
Biography
TBD
Nov. 8
TBD
Host:
Abstract
TBD
Biography
TBD
Nov. 15
TBD
Host:
Abstract
TBD
Biography
TBD
Nov. 22
Vinay Pilania, Mercedes-Benz Research & Development
Host: Dr. YangQuan Chen
Abstract
TBD
Biography
TBD
Dec. 6
TBD
Host:
Dec. 13
TBD
Host:
__
Spring 2019
Jan. 25
Physiological Sensing on the Face for Inferring Cognitive States, Benjamin Tag, Ph.D. Candidate, Graduate School of Media Design, Keio University, Japan
Host: Dr. Ahmed Arif
Abstract
Awareness of fluctuating levels of cognitive performance will support better management of tasks, allow for the development of new adaptable user interfaces informed by cognitive states, and will potentially serve the long-term health of users by avoiding frustration resulting from mismatched task demand and available cognitive performance. Technology pervasively surrounds us and enables virtually uninterrupted information retrieval and distribution, resulting in constant communication between people and computers. One of the key functions of a computer is to support its user and react to input with the response the user expects or desires, which requires an understanding of context. By using explicit and implicit input modalities, we can increase information density and allow computers to better interpret the user's context, making them context-aware. This research has mainly used consumer products, such as eyewear equipped with sensors for measuring eye movement, and infrared cameras and sensors to obtain measurements of changing facial temperature. We show that available solutions enable us to infer states of alertness, sustained attention, and cognitive workload. The concepts, results, and tools detailed in this research can help professionals, researchers, and students gain insights into the potential of context-aware systems, in particular cognition-aware systems.
Biography
Benjamin Tag is a Ph.D. candidate and researcher at the Graduate School of Media Design at Keio University. His research interests lie in the field of Human-Computer Interaction, with a special focus on Cognition-Aware Systems. He is investigating ways to understand human cognition by combining methods from the fields of cognitive psychology and pervasive computing. Specifically, he is interested in using ubiquitous technologies to augment the process of knowledge acquisition by implementing proactive recommender and intervention systems.
Feb. 1
Large Scale Training of Deep Convolutional Neural Networks, Dr. Naoya Maruyama, Research Scientist, Lawrence Livermore National Lab
Host: Dr. Dong Li
Abstract
TBD
Biography
TBD
Feb. 8
Towards Better User Interfaces for 3D, Dr. Wolfgang Stuerzlinger, Professor, School of Interactive Arts + Technology, Simon Fraser University, Canada
Host: Dr. Ahmed Arif
Abstract
TBD
Biography
TBD
Feb. 15
Multi-robot Exploration of Spatial-temporal Varying Fields, Dr. Wencen Wu, Assistant Professor, Department of Computer Engineering, San Jose State University
Host: Dr. Wan Du
Abstract
TBD
Biography
TBD
Feb. 22
Analytics for Text Entry Methods and Research, Dr. Scott MacKenzie, Associate Professor, Department of Electrical Engineering & Computer Science, York University, Canada
Host: Dr. Ahmed Arif
Abstract
A common task for users of desktop or mobile computers is the input of text. Whether preparing a report or 'texting' a friend about where to have lunch, we can't avoid this ubiquitous computing task. This talk will explore analytic methods of characterizing and comparing text input methods. We are interested in quantifying the work invested to enter text. For the Qwerty keyboard, entering "hello" takes five keystrokes. If input uses a soft keyboard on a smartphone combined with word completion, fewer keystrokes, or finger taps, are required. A special case is an ambiguous keyboard: fewer keys, with more than one letter per key. The phone keypad places 26 letters on just 8 keys. But what about other keyboards with 26 letters on, say, 7 keys, or 6 keys, or 5 keys? How about text entry with just one key? This talk will present, compare, and quantify the text input process for a variety of keyboards, some with as few as 1 key.
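The keystroke accounting described in the abstract can be made concrete with a small sketch. The snippet below computes keystrokes per character (KSPC) for multi-tap entry on the classic 8-key phone keypad; the letter groupings are the standard ones, but the function name and structure are our own illustration, not the speaker's analytics.

```python
# Sketch: keystrokes per character (KSPC) for multi-tap text entry on a
# standard 8-key phone keypad (letters only; classic abc/def/... groupings).
KEYPAD = ["abc", "def", "ghi", "jkl", "mno", "pqrs", "tuv", "wxyz"]

# Map each letter to the number of taps needed to reach it on its key.
TAPS = {ch: group.index(ch) + 1 for group in KEYPAD for ch in group}

def kspc(text: str) -> float:
    """Keystrokes per character for multi-tap entry of the given text."""
    letters = [ch for ch in text.lower() if ch in TAPS]
    return sum(TAPS[ch] for ch in letters) / len(letters)

print(kspc("hello"))  # h=2, e=2, l=3, l=3, o=3 -> 13 taps / 5 chars = 2.6
```

Compare with the Qwerty keyboard mentioned in the abstract, where "hello" takes five keystrokes, i.e., KSPC = 1.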
Biography
Scott MacKenzie's research is in human-computer interaction with an emphasis on human performance measurement and modeling, experimental methods and evaluation, interaction devices and techniques, text entry, touch-based input, language modeling, accessible computing, gaming, and mobile computing. He has more than 170 peer-reviewed publications in the field of Human-Computer Interaction (including more than 30 from the ACM's annual SIGCHI conference) and has given numerous invited talks over the past 25 years. In 2015, he was elected into the ACM SIGCHI Academy. That same year he was the recipient of the Canadian Human-Computer Communication Society's (CHCCS) Achievement Award. Since 1999, he has been Associate Professor of Computer Science and Engineering at York University, Canada.
March 1
A New Discipline for a New Technology: Food Informatics and the Internet of Food, Dr. Matthew Lange, Research Scientist & Professional Food and Health Informatician, Food Science and Technology Department, University of California, Davis
In conjunction with the CITRIS FIT Seminar
Host: Dr. Joshua Viers
Abstract
TBD
Biography
TBD
March 8
Fast and Parallelizable Ranking with Outliers from Pairwise Comparisons, Mahshid (Ashley) Montazer Qaem, Ph.D. Candidate, Electrical Engineering and Computer Science Department, University of California, Merced
Host: Dr. Sungjin Im
Abstract
TBD
Biography
TBD
March 22
Stochastic Gradient Descent on Modern Hardware: Multi-core CPU or GPU? Synchronous or Asynchronous? Yujing Ma, Ph.D. Candidate, Electrical Engineering and Computer Science Department, University of California, Merced
Host: Dr. Florin Rusu
Abstract
TBD
Biography
TBD
April 5
Generative Models for Robot Learning, Dr. Ajay Kumar Tanwani, Postdoctoral Scholar, Electrical Engineering and Computer Science, University of California, Berkeley
Host: Dr. Stefano Carpin
Abstract
TBD
Biography
TBD
April 12
D3D: Distilled 3D Networks for Video Action Recognition, Dr. David Ross Researcher, Google AI
Host: Dr. Miguel Carreira-Perpinan
Abstract
TBD
Biography
TBD
April 19
'Learning to Synthesize for Natural Image and Video Editing', and 'Learning to Stitch Videos', Yijun Li and Wei-Sheng Lai, Ph.D. Student and Ph.D. Candidate, Electrical Engineering and Computer Science Department, University of California, Merced
Host: Dr. Alberto Cerpa
Abstract
Visual synthesis is the process of creating new data by manipulating, editing, or re-organizing existing data. However, attempts by non-experts often end up deviating from the manifold of real natural data, leading to unrealistic results with undesired artifacts. The goal of my research is to develop effective computational models that facilitate more realistic and stunning creations, which will bring brand-new user experiences and transform the ways we communicate and collaborate. Along this direction, I have explored four different topics: image enhancement, completion, stylization, and video prediction. In this talk, I will mainly introduce the background and achievements of one topic, image/video stylization, which focuses on recomposing an image with new styles. Such a technique is not only useful for novel designs and creations, but is also an important step towards understanding the factors that constitute images.
Biography
Yijun Li (https://sites.google.com/site/yijunlimaverick/) is a Ph.D. student in the Vision and Learning Lab at the University of California, Merced, working with Prof. Ming-Hsuan Yang. His research interests lie in the areas of computer vision, computational photography, and machine learning. He previously received his M.S. degree from Shanghai Jiao Tong University and his B.S. degree from Zhejiang University.
Abstract
Despite the long history of image and video stitching research, existing academic and commercial solutions still produce strong artifacts. In this work, we propose a wide-baseline video stitching algorithm that is temporally stable and tolerant to strong parallax. Our key insight is that stitching can be cast as a problem of learning a smooth spatial interpolation between the input videos. To solve this problem, inspired by pushbroom cameras, we introduce a fast pushbroom interpolation layer and propose a novel pushbroom stitching network, which learns a dense flow field to smoothly align the multiple input videos with spatial interpolation. Our approach outperforms the state-of-the-art by a significant margin, as we show with a user study, and has immediate applications in many areas such as virtual reality, immersive telepresence, autonomous driving, and video surveillance.
Biography
Wei-Sheng Lai (http://graduatestudents.ucmerced.edu/wlai24/) is a Ph.D. candidate in Electrical Engineering and Computer Science at the University of California, Merced, under the advisement of Prof. Ming-Hsuan Yang. He received his B.S. and M.S. degrees in Electrical Engineering from National Taiwan University, Taipei, Taiwan, in 2012 and 2014, respectively. His research interests include computer vision, computational photography, and machine learning.
April 26
Human in the Loop: How to Satisfy User Comfort Requirements and Save Energy, Claudia Chitu, Ph.D. Candidate, Visiting Fulbright Scholar, Electrical Engineering and Computer Science Department, University of California, Merced & Electronics, Telecommunications and Information Technology, University Politehnica of Bucharest, Romania
Host: Dr. Alberto Cerpa
Abstract
TBD
Biography
TBD
May 3
Adaptive and Curious Deep Learning for Perception, Action, and Explanation, Dr. Trevor Darrell, Professor, Director of Berkeley Deep Drive (BDD), Co-Director of Berkeley Artificial Intelligence Research (BAIR), Computer Science Department, University of California, Berkeley
Host: Dr. Shawn Newsam
Abstract
TBD
Biography
TBD
May 10
Smart Buildings: HVAC Occupancy and Comfort-Based Model Predictive Control, Ashish Yadav, Graduate Student, Electrical Engineering and Computer Science Department, University of California, Merced
Host: Dr. Alberto Cerpa
Abstract
TBD
Biography
TBD
__
Fall 2018
EECS 290 (Fall 2018) Webpage: https://www.asarif.com/courses/eecs290/fall2018.html.
"Learning-Compression" Algorithm and Its Application for Neural Network Pruning, Yerlan Idelbayev, EECS, UC Merced
Abstract
In this talk, we will discuss model compression in general, an algorithm to achieve it, and its specific instantiation for neural network pruning. Pruning a neural net consists of removing weights with the goal of minimally degrading its performance. This is an old problem of renewed interest because of the need to compress ever larger nets so they can run on mobile devices. We formulate pruning as the optimization problem of finding the weights that minimize the loss while satisfying a pruning cost condition. We give a generic algorithm to solve this, which alternates "learning" steps that optimize a regularized, data-dependent loss with "compression" steps that mark weights for pruning in a data-independent way. Using a single pruning-level user parameter, we achieve state-of-the-art pruning in LeNet and ResNet networks of various sizes.
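The alternating scheme sketched in the abstract can be illustrated on a toy least-squares problem. The quadratic penalty form, the constants, and the function names below are our assumptions for illustration only, not the authors' exact formulation.

```python
import numpy as np

# Toy sketch of an alternating learning/compression loop for pruning.
# Loss: least squares ||Xw - y||^2; pruning condition: keep only k weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
w_true = np.zeros(10)
w_true[:3] = [3.0, -2.0, 1.5]        # only 3 weights truly matter
y = X @ w_true

def compress(w, k):
    """C step: keep the k largest-magnitude weights, zero the rest."""
    theta = np.zeros_like(w)
    idx = np.argsort(-np.abs(w))[:k]
    theta[idx] = w[idx]
    return theta

w = rng.normal(size=10)
theta = compress(w, k=3)
mu = 0.1
for _ in range(100):
    # L step: minimize ||Xw - y||^2 + mu*||w - theta||^2 (closed form here)
    A = X.T @ X + mu * np.eye(10)
    w = np.linalg.solve(A, X.T @ y + mu * theta)
    theta = compress(w, k=3)          # C step
    mu *= 1.1                         # gradually enforce the constraint

print(np.nonzero(theta)[0])           # indices of surviving weights
```

On this noiseless toy problem the loop recovers the three truly nonzero weights; real neural-net pruning replaces the closed-form L step with SGD on the network loss.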
Biography
Yerlan Idelbayev is a 3rd-year Ph.D. student in the EECS department, studying under the supervision of Prof. Miguel Carreira-Perpinan. Before UC Merced, he studied Information Systems at International IT University in Almaty, Kazakhstan, and Computer Science at UC San Diego. His research interests include nonlinear optimization, neural networks, and their compression.
Interactive Exploratory Analytics of Big Spatial Data, Dr. Ahmed Eldawy, Assistant Professor, University of California, Riverside
Abstract
Recently, there has been tremendous growth in the amount of big spatial data acquired by different sources such as satellites, IoT sensors, smartphones, autonomous cars, and others. For decades, end-users were familiar with an interactive exploratory interface that allowed them to apply spatial operations and explore the results in real time. However, the increasing volume of the data makes it impractical to provide the desired exploratory, real-time interface. This talk presents a new system paradigm that overcomes the limitations of existing systems by providing approximate and incremental query processing for big spatial data. The system consists of three modules: synoptic computation, incremental indexing, and interactive visualization. The synoptic computation module scales up the query processing by providing a real-time approximate answer over small-size synopses of the data such as samples and histograms. The incremental indexing module works in the background and incrementally organizes the data over a cluster of machines to speed up the query processing. Finally, the interactive visualization module presents the results in a visual format that allows the users to inspect the query answers. Preliminary results on the proposed system show that it can bridge the gap between the user requirements for interactivity and the increasing volume of big spatial data.
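To give a feel for the synopsis-based idea in the abstract, the sketch below answers a spatial range-count query approximately from a uniform sample, with a confidence interval, instead of scanning all points. This is an illustrative stand-in, not the described system's actual API.

```python
import math
import random

# Sketch: approximate count for a spatial range query from a uniform sample,
# in the spirit of synopsis-based (sample/histogram) query processing.
random.seed(42)
points = [(random.uniform(0, 100), random.uniform(0, 100))
          for _ in range(1_000_000)]

def approx_count(points, sample_size, box):
    """Estimate how many points fall in box, plus a 95% CI half-width."""
    x0, y0, x1, y1 = box
    sample = random.sample(points, sample_size)
    hits = sum(1 for x, y in sample if x0 <= x <= x1 and y0 <= y <= y1)
    p = hits / sample_size
    n = len(points)
    half = 1.96 * math.sqrt(p * (1 - p) / sample_size) * n
    return p * n, half

est, half = approx_count(points, 10_000, (10, 10, 30, 30))
exact = sum(1 for x, y in points if 10 <= x <= 30 and 10 <= y <= 30)
print(f"estimate {est:.0f} +/- {half:.0f} (exact {exact})")
```

The sample-based answer touches 1% of the data yet lands within a few percent of the exact count, which is the interactivity/accuracy trade-off the talk is about.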
Biography
Ahmed Eldawy is an Assistant Professor in Computer Science at the University of California Riverside. His research interests lie in the broad area of databases with a focus on big data management and spatial data processing. Ahmed is the main inventor of SpatialHadoop, the most comprehensive open source system for big spatial data management. Ahmed has many collaborators in industrial research labs including Microsoft Research and IBM Watson. He was awarded the best poster award in SIGSPATIAL 2017, Quality Metrics Fellowship in 2016, Doctoral Dissertation Fellowship in 2015, and Best Poster Runner-up award in ICDE 2014.
Indoor Human Information Acquisition from Physical Vibrations, Dr. Shijia Pan, Postdoctoral Fellow, Carnegie Mellon University
Abstract
The number of everyday smart devices (such as smart TVs, Samsung SmartThings, Nest, and Google Home) is projected to grow to the billions in the coming decade. The cyber-physical systems and Internet of Things systems built from these devices are used to obtain human information for various smart building applications. Different sensing approaches have been explored, including vision-, sound-, RF-, mobile-, and load-based methods, to obtain various kinds of indoor human information. From the system perspective, general problems faced by these existing technologies are their sensing requirements (e.g., line-of-sight, high deployment density, carrying a device) and intrusiveness (e.g., privacy concerns).
My research focuses on non-intrusive indoor human information acquisition through ambient structural vibration, which is referred to as "structures as sensors". People's interaction with structures in the ambient environment (e.g., floor, table, door) induces those structures to vibrate. By capturing and analyzing the vibration response of structures, we can indirectly infer information about the people and their actions that cause it. However, challenges remain. Due to the complexity of the physical world (in this case, both structures and people), sensing data distributions can change significantly under different sensing conditions. Therefore, from the data perspective, accurate information learning through a pure data-driven approach requires a large amount of labeled data, which is costly and difficult if not impossible to obtain in real-world sensing applications. My research addresses these challenges by combining physical knowledge and data-driven approaches. Specifically, my system can robustly learn human information from limited labeled data distributions by iteratively expanding the labeled dataset. With insights into the relationship between changes of sensing data distributions and measurable physical attributes, the iterative algorithm guides the expansion order by measured physical attributes to ensure a high learning accuracy in each iteration.
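The iterative labeled-set expansion described above can be sketched generically as self-training: fit a simple model on the labeled pool, pseudo-label the most confident unlabeled point, and repeat. The abstract's system orders the expansion by measured physical attributes; here we substitute model confidence as a stand-in, and all data and names are synthetic illustrations.

```python
import numpy as np

# Generic self-training sketch: iteratively expand a tiny labeled set,
# most-confident points first (confidence = nearest-centroid margin).
rng = np.random.default_rng(1)
# Two synthetic "vibration feature" clusters, e.g., two different occupants.
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

pseudo = {0: 0, 100: 1}                  # index -> label; one seed per class
unlabeled = set(range(200)) - set(pseudo)

while unlabeled:
    # Nearest-centroid model fit on everything labeled (seed + pseudo) so far
    cents = np.array([np.mean([X[i] for i, c in pseudo.items() if c == k],
                              axis=0) for k in (0, 1)])
    ids = sorted(unlabeled)
    d = np.linalg.norm(X[ids][:, None, :] - cents[None, :, :], axis=2)
    margin = np.abs(d[:, 0] - d[:, 1])   # distance margin as confidence proxy
    row = int(np.argmax(margin))         # expand the most confident point
    pseudo[ids[row]] = int(np.argmin(d[row]))
    unlabeled.remove(ids[row])

acc = float(np.mean([pseudo[i] == y[i] for i in range(200)]))
print(f"pseudo-label accuracy: {acc:.2f}")
```

Expanding in confidence order keeps each refit accurate, which is the property the abstract's physically guided ordering is designed to guarantee under shifting data distributions.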
Biography
Dr. Shijia Pan is a postdoctoral researcher at Carnegie Mellon University. She received her Bachelor's degree in Computer Science and Technology from the University of Science and Technology of China and her Ph.D. degree in Electrical and Computer Engineering from Carnegie Mellon University. Her research interests include cyber-physical systems, the Internet of Things (IoT), and ubiquitous computing. She has worked across multiple disciplines, focusing on indoor human information acquisition through ambient sensing. She has published in both top-tier computer science ACM/IEEE conferences (IPSN, UbiComp) and high-impact civil engineering journals (Journal of Sound and Vibration, Frontiers in Built Environment). She is the recipient of numerous awards and fellowships, including Rising Stars in EECS, the Nick G. Vlahakis Graduate Fellowship, the Google Anita Borg Scholarship, Best Poster Awards (SenSys, IPSN), a Best Demo Award (UbiComp), a Best Presentation Award (SenSys Doctoral Colloquium), and an Audience Choice Award (BuildSys) from ACM/IEEE conferences.
Augmenting Collaborations with Social Computing Interaction Designs of Communication Channels, Dr. Hao-Chuan Wang, Associate Professor, University of California, Davis
Abstract
Collaboration and communication gaps, ranging from difficulties in expressing oneself or understanding another person to failure in coordinating actions in teamwork, are prevalent problems for individuals and organizations. While improving personal communication skills continues to be important, designing digital communication channels to afford what group collaboration needs can offer solutions with scalability and cost-efficiency. In this talk, I will conceptualize social computing interaction design as a meta-solution to shape group behaviors toward more desirable processes and outcomes. I will demonstrate the approach with our recent work tackling different tasks and contexts, such as creative brainstorming, cross-lingual conversation, and generating and understanding referential expressions in remote collaborative work.
Biography
Hao-Chuan Wang is an Acting Associate Professor in the Department of Computer Science, University of California, Davis. Before joining UC Davis, he was an Associate Professor at National Tsing Hua University (NTHU), Taiwan, from 2012 to 2018. He is also affiliated with National Taiwan University (NTU)'s IoX Research Center as a Principal Investigator. He has formed international collaborations with peer researchers in North America and Asia, as well as industrial collaborations with Intel Labs, Microsoft Research, and Google. Dr. Wang's main research interest lies in the collaborative and social aspects of Human-Computer Interaction (HCI). His work integrates system design and the behavioral sciences of social computing research for problem solving and value creation. His recent projects include system designs for supporting multilingual collaboration, motion sensing-based analytics for studying non-verbal behaviors in mediated conversations, and studies of interpersonal knowledge transfer for augmenting human work in the future. Dr. Wang is an active member of international and regional HCI communities, including ACM SIGCHI, CSCW, and Chinese CHI. He has served as a member of the Steering Committees of CSCW and Chinese CHI, and was a Subcommittee Chair for ACM CHI 2017 and 2018.
Characterization and Modeling of Error Resilience in HPC Applications, Luanzheng Guo, EECS, UC Merced
Abstract
As HPC systems scale in size and power, the danger of silent errors, i.e., errors that can bypass hardware detection mechanisms and impact application state, grows dramatically. Consequently, applications running on HPC systems need to exhibit resilience to soft errors. Previous work has found that, for certain codes, this resilience can come for free, i.e., some applications are naturally resilient. However, we still lack a fundamental understanding of the program constructs that result in such natural error resilience. Understanding such natural resilience is critical for error detection and recovery, to avoid overprotecting regions of code that are naturally resilient.
In this talk, we will present our research efforts to capture and characterize application natural resilience, based on which we can quantify and model application resilience. The talk has two parts. In the first part, we will discuss FlipTracker, a framework designed to extract resilience code patterns using fine-grained tracking of error propagation and resilience properties. The framework and patterns enable a deeper understanding of the resilience properties of applications. We also show how resilience code patterns can guide application design towards natural resilience.
In the second part, we will discuss PARIS, a resilience prediction method that predicts fault manifestations using resilience code patterns and machine learning models. PARIS can predict the probability of all fault manifestations, while the state-of-the-art resilience prediction model cannot. It is also much faster (up to 450x speedup) than the traditional method (i.e., random fault injection).
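For context on the baseline PARIS is compared against, the sketch below shows random fault injection in miniature: flip one random bit in an intermediate value, rerun, and classify the outcome against a fault-free "golden" run. Real campaigns instrument compiled binaries rather than Python floats; the program and names here are illustrative assumptions.

```python
import random
import struct

# Minimal sketch of traditional random fault injection: flip one random bit
# in a double mid-computation and classify the outcome vs. a golden run.
def flip_bit(x, bit):
    """Return x with one bit of its IEEE-754 double representation flipped."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    (out,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))
    return out

def run(inject_at=None, bit=0):
    """Sum of squares 1..100, optionally corrupting one partial sum."""
    acc = 0.0
    for i in range(1, 101):
        acc += i * i
        if i == inject_at:
            acc = flip_bit(acc, bit)
    return acc

golden = run()                        # fault-free reference result
random.seed(7)
outcomes = {"masked": 0, "sdc": 0}    # sdc = silent data corruption
for _ in range(1000):
    result = run(inject_at=random.randint(1, 100),
                 bit=random.randint(0, 62))
    outcomes["masked" if result == golden else "sdc"] += 1
print(outcomes)
```

Each injected run costs a full re-execution, which is why thousands of trials are needed per program and why a predictive model such as PARIS can be orders of magnitude faster.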
Biography
Luanzheng Guo is a Ph.D. student in Computer Science at the University of California, Merced, under the supervision of Professor Dong Li. His research area is high-performance computing systems, with a focus on fault tolerance in large-scale parallel systems. During his Ph.D. study, his poster was nominated as a best poster candidate at SC'16, and he was a lead student volunteer at SC18. He is a reviewer for several prestigious conferences and international journals and was recognized as an outstanding reviewer by Elsevier in 2018. He was a summer intern at Lawrence Livermore National Laboratory from 2015 to 2018. Recently, his research was featured by HPCwire in its What's New in HPC Research column. He is a student member of IEEE, ACM, and SIGHPC.
Walnut Rootstock Development for Sustainable Nut Production: What Things Are, What They Look Like and Why Big Data, Dr. Andreas Westphal, Assistant Cooperative Extension Specialist, Assistant Nematologist, Kearney Agricultural Research and Extension Center
Abstract
Walnut is under constant attack by soil-borne plant pathogens, including crown gall, root rots, and plant-parasitic nematodes. Because walnut orchards have a lifetime expectancy of at least three to four decades, a high level of sustainable management and mitigation strategies for these soil-dwelling pathogens is paramount. Using rootstocks with elevated resistance and tolerance to all of these damaging organisms is an environmentally friendly and sustainable approach to reduce reliance on costly and possibly environmentally impacting management practices. Building on previous successes, a group of researchers from several UC campuses, USDA-ARS, California State University, Fresno, and UCANR has formed to investigate the potential of walnut germplasm (Juglans spp.) to generate such rootstocks. Interspecific crosses within Juglans have been made, taken into tissue culture by embryo rescue, and regenerated into clonal plants. Recent efforts have focused on two breeding populations with ca. 300 genotypes of clonal offspring. These are characterized for responses to different soil-borne pathogens, including crown gall, Phytophthora root rots, and plant-parasitic nematodes. In parallel, each genotype is sequenced to create a genotypic map. As phenotypic maps become available, they will be overlaid on the genotypic map to identify quantitative trait loci (QTL). Depending on the time necessary for pathogen testing, progress varies among pathogen systems. The goals of these efforts are to improve breeding strategies, release new superior rootstocks, and convey information on plant utility and economics to walnut stakeholders.
Biography
Andreas Westphal is a native of Germany. He completed his college and early university training in Germany before pursuing his Ph.D. in the US. For two decades, he has been working in several nematode-host plant systems. His research endeavors encompass nematode management in several annual crops, including maize, potato, small grains, soybean, sugar beet, watermelon, and others. After a scientist role at the German federal research institute Julius Kühn-Institut, he shifted his research emphasis to host plant resistance and tolerance in perennial crops. Since joining UC Riverside in 2015, he has directed selection efforts for nematode resistance and tolerance in walnut, Prunus, pistachio, and grape. He also conducts management research in these crops.
Microscope on Memory: FPGA Acceleration of Computer Memory System Assessments, Dr. Maya B. Gokhale, Distinguished Member of Technical Staff, Lawrence Livermore National Laboratory
Abstract
Recent advances in new memory technologies and packaging options have focused attention on computer memory system design and evaluation. Examples include high bandwidth memories such as Hybrid Memory Cube and HBM, 3D XPoint non-volatile memory, STT-MRAM, and ReRAM. Emerging memories display a wide range of bandwidths, latencies, and capacities, making it challenging for the computer architect to navigate the design space of potential memory configurations, and for the application developer to assess the performance impact of complex memory systems.
The Logic in Memory Emulator (LiME) is an FPGA-based hardware/software tool specially designed for memory system evaluation and experimentation. LiME uses the Xilinx Zynq UltraScale+ MPSoC to capture any and all memory accesses, whether from the CPU (Processing System, or PS) or the FPGA (Programmable Logic, or PL). LiME employs novel loopback circuitry in conjunction with address map relocation to pass memory references from the PS into the PL side. The memory request is looped back into the PS DRAM memory controller and concurrently processed by LiME.
We have demonstrated three high value use cases: non-intrusive memory access logging, emulation of multiple memory systems by passing the memory request through delay registers before entering the PS memory subsystem, and emulation of acceleration engines that can independently access memory. In this talk, I will describe this novel application of state-of-the-art FPGA embodied by the LiME framework and highlight its uses.
Biography
Maya Gokhale is a Distinguished Member of Technical Staff at the Lawrence Livermore National Laboratory, USA. Her career spans research conducted in academia, industry, and national laboratories. Maya received a Ph.D. in Computer Science from the University of Pennsylvania. Her current research interests include data-intensive architectures and reconfigurable computing. Maya's Streams-C programming language and compiler was adopted by Impulse Accelerated Technologies as the basis for Impulse C. Maya is a co-recipient of an R&D 100 award for the Trident C-to-FPGA compiler, a co-recipient of four patents related to memory architectures for embedded processors, reconfigurable computing architectures, and cybersecurity, and a co-author of more than one hundred technical publications. Maya is a member of Phi Beta Kappa and a Fellow of the IEEE for contributions to reconfigurable computing technology.
Limited-memory Quasi-Newton Optimization Methods for Deep Learning, Jacob Rafati Heravi, EECS, UC Merced
Abstract
Deep learning algorithms often require solving a highly nonlinear and nonconvex unconstrained optimization problem. Methods for solving optimization problems in deep learning are generally restricted to the class of first-order algorithms, such as stochastic gradient descent (SGD). SGD methods have several drawbacks, such as difficulty escaping saddle points and the need to tune many hyperparameters. Using second-order curvature information to find the search direction can yield more robust convergence for nonconvex optimization problems. However, computing the Hessian matrix for large-scale problems is not computationally practical. Alternatively, quasi-Newton methods construct an approximation of the Hessian matrix to build a quadratic model of the objective function. Quasi-Newton methods, like SGD, require only first-order gradient information, but they can achieve superlinear convergence, which makes them attractive alternatives for solving the nonconvex optimization problems in deep learning. In this talk, I will introduce limited-memory quasi-Newton optimization methods that are efficient for deep learning problems such as classification and regression on big data.
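A concrete instance of the limited-memory idea described above is the L-BFGS two-loop recursion, which computes a quasi-Newton search direction from only the most recent curvature pairs, without ever forming the Hessian approximation explicitly. The sketch below is illustrative only, not the speaker's implementation; the function and variable names are my own, following the standard textbook formulation.

```python
# Minimal sketch of the L-BFGS two-loop recursion: compute d = -H_k * g
# from stored curvature pairs s_i = x_{i+1} - x_i, y_i = g_{i+1} - g_i,
# without materializing the inverse-Hessian approximation H_k.
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Return the quasi-Newton search direction -H*grad (lists oldest-first)."""
    q = grad.astype(float).copy()
    rhos = [1.0 / np.dot(y, s) for s, y in zip(s_list, y_list)]
    alphas = []
    # First loop: walk pairs from newest to oldest.
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        a = rho * np.dot(s, q)
        alphas.append(a)
        q -= a * y
    # Scale by gamma = (s'y)/(y'y) from the newest pair (initial Hessian guess).
    s, y = s_list[-1], y_list[-1]
    q *= np.dot(s, y) / np.dot(y, y)
    # Second loop: walk pairs from oldest to newest.
    for s, y, rho, a in zip(s_list, y_list, rhos, reversed(alphas)):
        b = rho * np.dot(y, q)
        q += (a - b) * s
    return -q
```

With a single stored pair (s, y), the recursion satisfies the secant condition exactly: feeding in the gradient y returns the direction -s, and for any positive-curvature pair the result is a descent direction.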
Biography
Jacob Rafati is a Ph.D. candidate in the Electrical Engineering and Computer Science program at the University of California, Merced. He is also a member of Dr. David C. Noelle’s Computational Cognitive Neuroscience Lab. His research focus is on Optimization, Machine Learning and Reinforcement Learning. This talk is based on his recent collaborative research work that involves investigating and implementing alternative optimization methods for large-scale machine learning problems, such as deep learning and deep reinforcement learning. For more details about this project visit his website at http://rafati.net.
Artificial Intelligence: How Customer Reactions Impact Innovation, Dr. Lisa Yeo, Assistant Professor, University of California, Merced
Abstract
Artificial Intelligence (AI) technologies are often included as product features (e.g., facial and voice recognition, autonomous driving) that drive product and service innovation. However, such innovations increase software complexity, leading to security and privacy issues. Customer reactions to security or privacy failures may affect product demand; customer demand reaction to the security or privacy implications of new features, such as AI-driven technology, plays a role in regulating the rate of innovation. This work examines the trade-off between product innovation and the increased risk of security breaches in AI-enabled products and services.
Biography
Dr. Lisa Yeo is an Assistant Professor in the Ernest & Julio Gallo Management Program at UC Merced who works to help organizations understand how to safely govern the data and information they need to compete. By focusing on people and process, Lisa believes that organizations can design and build information systems that make it easy to protect privacy and prevent security breaches without requiring extensive investments in security layers after the fact. Lisa has worked in information security for 15 years as both a technical specialist and a business advisor. During this time, she wrote the book Personal Firewalls, protected the infrastructure for the Alberta Legislature, and guided the secure connection of all public libraries in Alberta as part of the Alberta SuperNet project. Lisa holds a B.Math in Applied Math from the University of Waterloo and an MBA and PhD (Operations & Information Systems) from the University of Alberta.
Remote Sensing Image Analysis Based on Deep Learning, Dr. Dengfeng Chai, Associate Professor, Institute of Spatial Information Technique, Zhejiang University
Abstract
Extracting information of interest from remote sensing images, such as those taken from satellites or aircraft, is a long-standing problem, yet one which has benefited significantly over the past few years from advances in deep learning. In this talk, we cover some of the advances in remote sensing image analysis based on deep learning and present our recent research in the field. In particular, we describe our work on cloud and cloud shadow detection in Landsat multi-spectral imagery, and our work on extracting geo-objects of interest from high-resolution satellite and aerial imagery.
Biography
Dr. Dengfeng Chai is an associate professor at the Institute of Spatial Information Technique, Zhejiang University, China. He received his PhD from the State Key Lab of CAD&CG at Zhejiang University. Before that, he received his Master's degree from the State Key Lab of Information Engineering in Surveying, Mapping and Remote Sensing at Wuhan University and his Bachelor's degree from Wuhan University. He was a postdoctoral fellow in the Department of Photogrammetry at the University of Bonn, Germany. He is currently a visiting scholar at the University of California, Merced, where he is being hosted by Prof. Shawn Newsam. His research interests include photogrammetry and remote sensing, computer vision, and pattern recognition, mainly focusing on statistical approaches and spatial models for object recognition and extraction from remote sensing images, especially deep learning based approaches. He is a principal investigator of two projects supported by the National Natural Science Foundation of China and one project supported by the Zhejiang Provincial Natural Science Foundation of China. He has published at ICCV, CVPR, and in Pattern Recognition, among other venues.
Designing Alternative Sensory Channels: Visualizing Nonverbal Communication through AR and VR Systems for People with Autism, Dr. LouAnne Boyd, Assistant Professor, Chapman University
Abstract
Social communication is a key component of success and happiness. Our ability to express our needs and wants, as well as to understand others, is central to our connection to one another and our capacity to teach and learn. Challenges with social communication put learning on hold and place youth at risk for bullying, social isolation, and potentially serious mental-health concerns. Thus, supporting the social skills of people with autism could have a positive impact on both the social and mental wellbeing of individuals with autism. Although much research has focused on supporting social skills broadly, little attention has been paid to developing effective nonverbal behaviors, which are necessary to initiate, maintain, and gracefully terminate a social interaction. This talk describes the design and evaluation of realtime visualizations of prosody and proximity through three lab-based experiments, as well as interviews with participants and family members about their experience with these novel AR and VR technologies. The results of the interviews with participants and parents highlight issues of usability, learnability, and comfort, and culminate in an assistive technology design concept, the Sensory Accommodation Framework, which provides four technical mechanisms for supporting sensory perception differences through computation.
Biography
Dr. LouAnne Boyd is an Assistant Professor in the Software Engineering and Computer Science Department in Chapman's Schmid College of Science and Technology. Her current research interests in Human-Computer Interaction include designing, developing, and evaluating novel assistive and accessible technologies for neurodiverse users. LouAnne holds a B.A. in psychology from Washington University in St. Louis, an M.A. in psychology from Towson University, and a Ph.D. in Informatics from UC Irvine. She is also a Board Certified Behavior Analyst with over 20 years of professional clinical experience working with neurodiverse people in hospital, school, home, and community settings. Her overarching goal is to promote diversity and inclusion. To that end, her current HCI research explores technical mechanisms to support sensory accommodation for assistive technology users.
Architectural Study for the Deep Learning Era, Dr. Hyeran Jeon, Assistant Professor, San Jose State University
Abstract
Deep learning has recently become the core algorithm of many applications. It enables computing devices to automatically recognize individuals in photos, cars to navigate by themselves, and medical devices to diagnose cancer. With deep learning, software developers do not need to design sophisticated algorithms to extract important features from input data. Amid this computing paradigm transition, researchers need to understand the types of applications that can be accelerated by deep learning and the performance bottlenecks of deep learning applications. In this talk, Dr. Jeon will first introduce a few example deep learning applications that her research group has developed for smart cities, secure computing, and medical image processing. In the second part of the talk, she will introduce Tango, a new deep learning benchmark suite that her research group recently released. She will show in-depth architectural characterization results measured with Tango on various accelerators, which will be helpful for developing new accelerator designs.
Biography
Hyeran Jeon is an Assistant Professor at San Jose State University. Her research interests lie in energy-efficient, high-throughput processor and system design. She is currently leading several research projects on the efficient acceleration of deep learning applications and the development of new deep learning applications. Her research group is sponsored by the California Energy Commission, Lam Research, NVIDIA, and Xilinx. Before joining San Jose State University, she earned her Ph.D. at the University of Southern California in 2015 and worked at Samsung, AMD, and the IBM T.J. Watson Research Center as a systems software engineer and research intern.
Spring 2018
Jan. 26
Geographic Knowledge Discovery Using Ground-Level Images and Videos, Professor Shawn Newsam, EECS, UC Merced
Abstract
This work investigates social multimedia for geographic knowledge discovery. Specifically, community-contributed ground-level images and videos are used to map what-is-where on the surface of the Earth in much the same way that overhead images taken from air- or space-borne platforms have been used for decades in the traditional field of remote sensing. The overarching premise is that georeferenced social multimedia data can be considered a form of volunteered geographic information. Further, it can enable geographic discovery not possible through traditional means. The framework, termed proximate sensing, is applied to a range of geographic discovery problems including land cover and land use mapping, mapping public sentiment, mapping pet ownership, and mapping human activities. The image and video analysis is performed using state-of-the-art computer vision techniques based on deep learning.
Biography
Dr. Shawn Newsam is an associate professor and founding faculty in Electrical Engineering and Computer Science at the University of California, Merced. He has degrees from UC Berkeley, UC Davis, and UC Santa Barbara and did a postdoc at Lawrence Livermore National Laboratory before joining UC Merced. He is the recipient of a DOE Early Career Scientist and Engineer Award, an NSF Faculty Early Career Development (CAREER) Award, and a Presidential Early Career Award for Scientists and Engineers (PECASE). His research interests include image processing, computer vision, and machine learning particularly as applied to spatial data.
Building Internet of Things Systems via Networked Sensing and Mobile Computing Innovations, Professor Wan Du, EECS, UC Merced
Abstract
It is estimated that the global Internet of Things (IoT) will connect about 30 billion objects and that the global market value of IoT will reach $7.1 trillion by 2020. Deployed IoT systems are changing our lives and how we interact with the surrounding world. In this talk, I will introduce my research on building IoT systems via networked sensing and mobile computing innovations. In an interdisciplinary project, we developed a networked sensing system that measures the water quality of urban reservoirs and the spatial wind distribution over the water surface, which in turn enables real-time monitoring and analysis of water quality for smart cities. Three fundamental research problems have been solved. First, working with my colleagues, I found the best locations for wind sensors by studying the correlation of wind stress at different locations; ten wind sensors have been deployed in an urban reservoir in Singapore. To collect data from the deployed sensors, we developed a sparse wireless networking system that provides adaptive communication over long-distance, low-power wireless links and efficient data collection over multi-hop paths. To remotely update the software of the deployed sensors or diffuse bulk data to them, we designed a fast data dissemination protocol that significantly improves dissemination efficiency by transmitting rateless-encoded packets over constructive interference and pipelining. Beyond these academic achievements, the networked sensing system has been providing essential information to the Public Utilities Board of Singapore for smart reservoir management, making the project socially responsible as well. Finally, I will introduce two mobile computing systems we have developed to enable some interesting IoT applications.
Biography
Dr. Wan Du is currently an Assistant Professor in Electrical Engineering and Computer Science at the University of California, Merced. He worked as a Research Fellow in the School of Computer Science and Engineering, Nanyang Technological University, Singapore, from 2011 to 2017. Dr. Du has been conducting active research on IoT system development, especially networked sensing and mobile computing. His representative research projects include the deployment of a water quality monitoring system in urban reservoirs, visible light communication based on smartphones, and a smartphone-based activity profiling system. He is also working on two data analytics projects for urban computing. He has published high-quality research papers in reputed conferences including ACM MobiCom, ACM SenSys, ACM MobiHoc, ACM/IEEE IPSN, IEEE INFOCOM, and IEEE ICDCS, and in journals including IEEE/ACM Transactions on Networking, IEEE Transactions on Mobile Computing, IEEE Transactions on Wireless Communications, and ACM Transactions on Sensor Networks. His research on water quality monitoring received the best paper award at ACM SenSys 2015 and the best demo award at IEEE SECON 2014. He has also received the Distinguished Technical Program Committee (TPC) Member Award of IEEE INFOCOM 2018.
For Better or Worse, Richer or Poorer: The Future of Tech for Good, Dr. Brandie Nonnecke, UC Berkeley CITRIS Director of Tech for Social Good
This talk is part of the EECS | CITRIS Frontiers in Technology Series.
Abstract
We have a complicated relationship with tech. Throughout history, technological advancements have helped us address some of our most pressing challenges, but their application has also created new ones. "A Tech + Human Love Story" will share examples of how tech, from AI and digital identity systems to social media platforms, can be applied to change our world for good, but will also provide caution on how tech must be designed and applied in ways that are inclusive, fair, and just.
Biography
Dr. Brandie Nonnecke is the Research & Development Manager for CITRIS, UC Berkeley and Program Director for CITRIS, UC Davis. She is a Fellow at the World Economic Forum, where she serves on the Council on the Future of the Digital Economy and Society. Brandie researches human rights at the intersection of law, policy, and emerging technologies. Her current research is focused on the benefits and risks of AI-enabled decision-making, including issues of fairness, accountability, and appropriate governance structures. She has published research on algorithmic decision-making for public service provision in the urban context and outlined recommendations for how to better ensure the application of AI supports equity and fairness. She is also researching the ethics of biometric-based digital identity systems and recently published a piece highlighting the risks of digital ID systems for refugees.
The Psychology of Input and Interaction of/with Text and Numbers, Professor Ahmed Sabbir Arif, EECS, UC Merced
Abstract
Text entry has become an essential part of our daily life. Nowadays, we input text on and with various devices, in both stationary and mobile settings. Since the process of text entry involves both cognitive and motor skills and requires close cooperation between the system and the user, an understanding of both factors is necessary to develop more efficient input techniques. In this talk, I will discuss the development of a model that accounts for the most important human and system factors to predict text entry performance. I will demonstrate how this model was used to identify and address bottlenecks in text entry performance by making subtle changes to user interfaces. I will then shift focus to interaction with text and numbers. Data exploration is an integral part of uncovering the secrets and structure of scientific datasets. However, this process is challenging, especially for non-experts coming into an expert domain. I will discuss how common coding theory can be exploited in user interfaces to facilitate collaborative learning, conceptual understanding, and exploration and discovery in different datasets, including gene expressions and metabolic pathways. Finally, I will conclude by reflecting on future directions of my research.
Biography
Ahmed Sabbir Arif is an Assistant Professor of Electrical Engineering and Computer Science at UC Merced. As a researcher, his goal is to make computer technologies accessible to everyone by developing intuitive input and interaction techniques. A major thread of his work focuses on smarter solutions for text entry. His other interests include tangible user interfaces, mobile interaction, child-computer interaction, usable security, and data visualization. His research has contributed towards the development of more reliable interactive systems and influenced practices. He has received many prestigious awards for his research, including the Michael A. J. Sweeney Award and the CHISIG Gitte Lindgaard Award. Before joining UC Merced, he was a Postdoctoral Fellow at Ryerson University. He was also an NSERC ENGAGE Postdoctoral Fellow at Flowton Tech and a Research Intern at Microsoft Research, Redmond.
No seminar
Plug-and-play Irrigation Control at Scale, Daniel Winkler, EECS, UC Merced
Abstract
Lawns, also known as turf, cover an estimated 128,000 square kilometers in North America alone, with landscape irrigation representing 30% of freshwater consumed in the residential domain. With this consumption comes a large environmental, economic, and social incentive to make turf irrigation systems as efficient as possible. Recent work introduced the concept of distributed control in irrigation systems, but existing control strategies either do not take advantage of the distributed control or do not revise the strategy over time in response to collected data. In this work, we introduce PICS, a data-driven control strategy that self-improves over time, adapts to local conditions and weather changes, and requires virtually no human input in either setup or maintenance, providing a plug-and-play system with minimal pre-deployment effort. In addition to substantial improvements in ease of use, we find across 4 weeks of large-scale irrigation system deployment that PICS improves irrigation efficiency by 12.0% compared to the industry best and 3.3% compared to the academic state of the art. Despite using less water, PICS was also found to improve quality of service by a factor of 4.0x compared to the industry best and 2.5x compared to the academic state of the art.
Biography
Daniel Winkler received his BS in Computer Science and Engineering with honors from UC Merced in 2013. An ACM member, he has since been pursuing his PhD under the advisement of Dr. Alberto Cerpa in UC Merced's ANDES Lab. Although his current research focuses on intelligent design and management of turf irrigation systems through the use of embedded devices, Daniel maintains a diverse interest in general resource management applications.
Simulating virtual crowds with 100,000 agents in real-time on your laptop, Tomer Weiss, CS, UCLA
Abstract
The movement of large numbers of people is important in many situations, such as the evacuation of a building in an emergency, urban planning, and visual effects. Since laboratory experiments are not readily available, most research is conducted by means of computer simulations of crowds. Graphics researchers and others have proposed many simulation models. However, most of these models are tailored to specific scenarios and are computationally expensive. One of the main challenges stems from the difficulty of unifying them into a single model that scales and works well for both sparse and dense crowds. In this talk, I focus on my recent work developing a position-based framework for crowd simulation. I demonstrate the framework's strengths by simulating large crowds at interactive rates with hundreds of thousands of agents, which was previously unachievable. This new method is suitable for use in interactive games and was recently presented at the ACM SIGGRAPH Conference on Motion in Games 2017, where it received the best paper award.
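To give a flavor of the position-based approach, the sketch below shows the core constraint-projection step typically used in position-based dynamics: when two agents' predicted positions overlap, each is pushed apart along the contact normal by half the penetration depth. This is an illustrative sketch, not the paper's implementation; the function name and agent radius are hypothetical.

```python
# Project a pairwise collision constraint C(p_i, p_j) = |p_i - p_j| - 2r >= 0
# in the style of position-based dynamics: overlapping agents are separated
# along the line between their centers, splitting the correction equally.
import numpy as np

def project_collision(p_i, p_j, radius=0.5):
    """Return corrected positions for two possibly overlapping agents."""
    delta = p_i - p_j
    dist = np.linalg.norm(delta)
    penetration = 2 * radius - dist
    if penetration <= 0 or dist == 0:
        return p_i, p_j                      # constraint already satisfied
    n = delta / dist                         # unit contact normal (j -> i)
    correction = 0.5 * penetration * n       # each agent moves half the overlap
    return p_i + correction, p_j - correction
```

In a full simulator, this projection is applied iteratively over all colliding pairs each time step, after which velocities are recomputed from the corrected positions; the scalability of the method comes from how cheap and parallelizable each projection is.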
Biography
Tomer Weiss is a PhD candidate at the University of California, Los Angeles, scheduled to defend his thesis this year. He received the best paper award at the ACM SIGGRAPH Conference on Motion in Games for his work on virtual crowd simulation. He received his BSc degree in computer science from Tel Aviv University in 2013 and his MS in Computer Science from UCLA in 2016. His research interests include computer graphics and optimization methods. He is a member of the UCLA Computer Graphics & Vision Laboratory, directed by Professor Demetri Terzopoulos.
Autonomous Scooter Design for People with Mobility Challenges, Professor Kaikai Liu, San Jose State University
Abstract
People with mobility challenges, for example, the elderly, blind, and disabled, face a multitude of challenges every day that can prevent them from getting where they want to go. Despite the technical success of existing assistive technologies, for example, electric wheelchairs and scooters, they are still far from effective enough in helping those in need navigate to their destinations in a hassle-free manner. Riders often face challenges operating scooters in certain indoor and crowded places, especially on sidewalks with numerous obstacles and pedestrians. People with certain disabilities, such as the blind, are often unable to drive their scooters. In this talk, we will discuss our ongoing work in designing a cutting-edge autonomous scooter. We focus on indoor navigation scenarios for the autonomous scooter where the current location, maps, and nearby obstacles are unknown. To resolve trade-offs among system complexity, sensor coverage, and resolution, we propose solutions for object mapping and recognition under various spatial and lighting conditions. Solving these challenges will enable the scooter to both travel within buildings and perform tight maneuvers through densely crowded areas automatically. We hope our system will allow people with mobility challenges to ambulate independently and safely in possibly unfamiliar surroundings.
Biography
Kaikai Liu has been an assistant professor in the Department of Computer Engineering since August 2015. His research interests include Mobile and Cyber-Physical Systems (CPS), Smart and Intelligent Systems, Internet-of-Things (IoT), and Software-Defined Computing and Networking. He has published over 20 peer-reviewed papers in journals and conference proceedings and 1 book, and holds 4 patents (licensed by three companies). He has developed several prototype systems from scratch, for example, emergency communication systems for smart cities, an ultra-wideband system for locating search-and-rescue victims, and indoor localization and navigation. He is a recipient of the Outstanding Achievement Award at UF (four times), the Apple WWDC Scholarship (2013 and 2014), the Innovator Award from the Office of Technology Licensing at UF (2014), the Top Team Award at the NSF I-Corps Winter Cohort (Bay Area, 2015), the 2015 Gator Engineering Attribute Award for Creativity at UF, the IEEE SWC 2017 Best Paper Award, the IEEE SECON 2016 Best Paper Award, ACM SenSys 2016 Best Demo Runner-up, the 2016 CoE Kordestani Endowed Research Professorship, and the 2017 and 2018 CoE Research Professor Awards.
A Robot Character for Every Home, Mark Palatucci, Co-Founder/Head of Cloud AI and Machine Learning at Anki
This talk is part of the EECS | CITRIS Frontiers in Technology Series.
Abstract
For the past several decades, consumer applications of robotics have been more science fiction than reality. However, recent developments in deep learning and cloud AI, and plummeting prices of both computation and sensing, have created the necessary components for a rapidly growing consumer robotics industry to finally emerge. In this talk, I'll discuss the evolution of Anki from three Ph.D.s and a kitchen-table prototype to a global company that has quickly become the 2nd largest producer of consumer robots in the world. I'll share many of the successes and challenges of producing robots at million-plus unit scale, and the important trends that will impact both academia and industry. I'll talk about the importance of emotion and character for building a great user experience, and some surprising findings about human-robot interaction. I'll also discuss Anki's unique "bottoms-up" approach to robotics, and show how, with an increasingly sophisticated series of low-cost mass-market robots, we've created a virtuous cycle that's driving growth in the industry and moving toward a future with intelligent, emotive robot characters for every home.
Biography
Mark Palatucci is the Co-founder and Head of Cloud AI and Machine Learning at Anki. While at Anki, he led the software teams that developed award-winning products including Anki Overdrive and Cozmo. He is an inventor on multiple US patents, and was awarded Ph.D. fellowships from the National Science Foundation and Intel Corporation for his research on machine learning. Mark earned a bachelor's degree in computer science from the University of Pennsylvania and an M.S. and Ph.D. in Robotics from Carnegie Mellon University.
An Exciting Future: At the crossroads of people, profit, planet and petabytes of data, Chandrakant Patel, Chief Engineer and Senior Fellow, Hewlett-Packard
This talk is part of the EECS | CITRIS Frontiers in Technology Series.
Abstract
Humanity will face more change over the next 15 years than in all of human history to date. The world will be deeply affected by population increase, shifting resource constraints, rapid urbanization, changing demographics, hyper globalization and sustainability challenges. Moreover, externalities such as environmental pollution, natural disasters and military conflicts will increasingly become a burden to society. In this talk, I will outline the megatrends, and examine the role of future cyber physical systems in addressing these 21st century megatrends. I will seek to drive a vigorous conversation on the role of physical fundamentals and information technologies in instantiating systemic innovations that make life better for everyone. I will close with a perspective on an idea-to-value framework that builds on lessons I have learnt in my career in Silicon Valley.
Biography
Chandrakant is currently the Chief Engineer and Senior Fellow of HP Inc. Chandrakant has led HP Labs in delivering innovations in chips, systems, data centers, storage, networking, print engines, and software platforms. He is a pioneer in thermal and energy management in data centers, and in the application of information technology to available-energy management at city scale. Chandrakant is an ASME and IEEE Fellow, and has been granted 151 patents and published more than 150 papers. An advocate of a return to fundamentals, he has served as adjunct faculty in engineering at Chabot College, U.C. Berkeley Extension, San Jose State University, and Santa Clara University. In 2014, Chandrakant was elected to the Silicon Valley Engineering Hall of Fame.
Computational Approaches toward Better Drugs and Better Health Care, Professor Xia Ning, Indiana University - Purdue University Indianapolis
Abstract
Drug development and responsible drug use are critical issues for health care. Drug development is extremely costly and has an extremely low success rate. Even after successful development and FDA approval, many marketed drugs are not equally effective for all patients. In this talk, we will present how computational approaches can help accelerate drug development and facilitate precision drug selection. Specifically, we will discuss a new ranking framework and ranking methods to prioritize drug candidates when multiple criteria are considered (e.g., drug bioactivity and selectivity). We will also discuss a new ranking-based approach to selecting effective cancer drugs for different patients.
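One simple way to make "prioritizing candidates under multiple criteria" concrete is Pareto-dominance ranking: a candidate is top-ranked if no other candidate is at least as good on every criterion, and lower ranks are assigned by peeling off successive non-dominated fronts. The sketch below is illustrative only, not the speaker's framework; the candidate names and scores are hypothetical.

```python
# Rank candidates by Pareto dominance over two higher-is-better criteria,
# e.g. bioactivity and selectivity. Rank 1 = non-dominated front; then the
# front is removed and the next layer receives rank 2, and so on.

def pareto_rank(candidates):
    """candidates: dict name -> (bioactivity, selectivity). Returns name -> rank."""
    remaining = dict(candidates)
    ranks, level = {}, 1
    while remaining:
        # A candidate is on the current front if no other remaining candidate
        # is at least as good on both criteria (ignoring itself).
        front = [name for name, sa in remaining.items()
                 if not any(sb[0] >= sa[0] and sb[1] >= sa[1] and sb != sa
                            for sb in remaining.values())]
        for name in front:
            ranks[name] = level
            del remaining[name]
        level += 1
    return ranks
```

For example, a candidate with the best bioactivity and another with the best selectivity both land in rank 1, while a candidate worse on both criteria falls to rank 2; real multi-criteria ranking methods refine this basic ordering with learned or preference-weighted models.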
Biography
Xia Ning is an Assistant Professor in the Department of Computer and Information Science (CIS) at Indiana University – Purdue University Indianapolis (IUPUI). She received her Ph.D. from the University of Minnesota, Twin Cities, in 2012. From 2012 to 2014, she worked as a research staff member at NEC Labs America. In Fall 2014, she joined IUPUI. She is also affiliated with the Center for Computational Biology and Bioinformatics (CCBB), Indiana University, and the Regenstrief Institute. Ning’s research is on Data Mining, Machine Learning and Big Data analysis, with highly interdisciplinary applications in Chemical Informatics, Bioinformatics, Health Informatics, e-commerce, etc. Specifically, Ning’s research focuses on developing scalable models and computational methods to derive knowledge from heterogeneous Big Data; to conduct modeling, ranking, classification and prediction; and ultimately to solve critical, real-world, high-impact problems. Specific research topics include drug candidate prioritization for drug discovery, cancer drug selection for precision medicine, and information retrieval from electronic medical records.
Hidden Two-Stream Convolutional Networks for Action Recognition, Yi Zhu, EECS, UC Merced
Abstract
Analyzing videos of human actions involves understanding the temporal relationships among video frames. Convolutional Neural Networks (CNNs) are the current state-of-the-art methods for action recognition in videos. However, the CNN architectures currently in use have difficulty capturing these relationships. State-of-the-art action recognition approaches rely on traditional local optical flow estimation methods to pre-compute motion information for CNNs. Such a two-stage approach is computationally expensive, storage-demanding and not end-to-end trainable.
In this talk, I will first review the literature and challenges of video classification and motivate our work. I will then present a novel CNN architecture that implicitly captures motion information between adjacent frames. This new module can be plugged into any state-of-the-art action recognition framework. We name our approach hidden two-stream CNNs because it takes raw video frames as input and directly predicts action classes without explicitly computing optical flow. We show that our end-to-end approach is 10x faster than its two-stage counterpart and requires significantly less storage, since optical flow does not need to be saved. We present experimental results on four challenging action recognition datasets: UCF101, HMDB51, THUMOS14 and ActivityNet v1.2. Our approach is shown to significantly outperform the previous best real-time approaches.
Biography
Yi Zhu is a PhD student in the EECS program at UC Merced. Since 2014, he has been working with Professor Shawn Newsam towards his PhD degree on computer vision. His research is focused on video action recognition/detection, optical flow/depth estimation and geospatial knowledge discovery.
Online Partial Throughput Maximization for Multidimensional Coflow, Maryam Shadloo, EECS, UC Merced
Abstract
Coflow has recently been introduced to capture communication patterns that are widely observed in the cloud and in massively parallel computing. A coflow consists of a number of flows, each representing data communication from one machine to another. A coflow is completed when all of its flows are completed. Due to its elegant abstraction of the complicated communication processes found in various parallel computing platforms, coflow has received significant attention. In this talk, we optimize coflow scheduling for the objective of maximizing partial throughput. This objective measures the progress made on partially completed coflows before their deadline. Partially processed coflows can still be useful when their flows send out data that can be used in the next round of computation. In our measure, a coflow is processed by a certain fraction when all of its flows are processed by that fraction or more. We consider a natural class of greedy algorithms, which we call myopic concurrent; these algorithms seek to maximize the marginal increase of the partial-throughput objective at each time step. We analyze the performance of our algorithm against the optimal scheduler. In fact, our result is more general, as a flow can be extended to demand various heterogeneous resources. Our experiments demonstrate our algorithm’s superior performance.
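As a concrete illustration of the partial-throughput measure described above, a coflow's progress is the minimum progress over its flows, and the objective sums this fraction across coflows. This is a toy sketch with invented data; the function names are ours, not from the talk:

```python
def coflow_fraction(flows_done, flows_total):
    """A coflow is processed by fraction f when every one of its flows
    is processed by at least f, i.e. the minimum over per-flow progress."""
    return min(done / total for done, total in zip(flows_done, flows_total))

def partial_throughput(coflows):
    """Partial-throughput objective: sum of per-coflow fractions
    (each in [0, 1]) achieved before the deadline."""
    return sum(coflow_fraction(done, total) for done, total in coflows)

# Two coflows: one fully completed, and one whose flows are each half done.
cfs = [([4, 2], [4, 2]),   # both flows finished -> fraction 1.0
       ([3, 1], [6, 2])]   # both flows half done -> fraction 0.5
print(partial_throughput(cfs))  # 1.5
```

Note that finishing one flow of a coflow while its slowest flow stalls adds nothing to the objective, which is what drives the myopic concurrent algorithms to spread progress evenly across a coflow's flows.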
Biography
Maryam Shadloo is a PhD student in the EECS program at UC Merced. Since 2014, she has been working in the area of theoretical computer science under the supervision of Prof. Sungjin Im. Specifically, she is interested in designing approximation and online algorithms for problems arising in scheduling and resource allocation.
Optimizing Thread Management on GPUs, Dr. Guoyang Chen, Alibaba Research
Abstract
Heterogeneous architectures, especially Graphics Processing Units (GPUs), have been embraced in modern computing for their tremendous computing power. Two factors critical to the performance of heterogeneous computing are memory performance and thread management. The former is essential for maximizing the effective memory bandwidth of heterogeneous computing, while the latter is important for leveraging the architecture's concurrency and enabling flexible scheduling policies while minimizing runtime overhead. In this talk, I will focus on two techniques for optimizing thread management on GPUs. One is Free Launch, which is designed to overcome the shortcomings of hardware-based dynamic parallelism; the other is Effisha, a framework that enables block-level preemptive scheduling on GPUs with little runtime overhead.
Biography
Dr. Guoyang Chen is currently a senior engineer (researcher) working on accelerating emerging machine learning algorithms on heterogeneous platforms at Alibaba Group US Inc. He received his Ph.D. in computer science from North Carolina State University (NCSU) in 2016 and his BS in computer science from the University of Science and Technology of China in 2012. His research interests span high performance computing, compiler optimizations and system architecture, with a focus on enabling both efficient software support and program optimizations for heterogeneous computing and data-intensive applications (e.g., data analytics and machine learning applications). Along those directions, he has developed novel solutions for data placement problems on GPUs, enabling preemptive scheduling on GPUs, eliminating the large overhead of dynamic parallelism on GPUs, and a systematic treatment of position independence on Non-Volatile Memory (NVM). His work has been published in 10+ top conferences and journals in both the computer systems and data engineering areas, such as MICRO, PPoPP, ICS, and ICDE.
__
Fall 2017
Aug. 25
Introduction to EECS 290, Mukesh Singhal, UC Merced
Sept. 15
Scalable Asynchronous Gradient Descent Optimization for Big Models, Martin Torres, UC Merced
Abstract
The number of features in models has been steadily growing, and it is now common to see models with millions or even billions of features. However, existing data analytics systems approach predictive model training exclusively from a data-parallel perspective, partitioning the data across multiple workers and executing computations concurrently over different partitions. Although various synchronization policies are used to emphasize speedup or convergence, little attention has been paid to model management and its importance for effective training. In this work, we present a general framework for parallelizing stochastic optimization algorithms over massive models that cannot fit in memory, by vertically partitioning the model offline and asynchronously updating the resulting partitions online. We identify suboptimal behavior in the naive implementation and minimize concurrent requests to the common model by introducing a preemptive push-based sharing mechanism. Our experimental results show improved convergence over HOGWILD! for both real and synthetic datasets, and ours is the only solution that scales to massive models.
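The asynchronous-update idea can be sketched in a few lines. This is a minimal HOGWILD!-style toy, not the presented framework or its push-based sharing mechanism: workers update a shared model vector without locks, which works well when sparse examples touch mostly disjoint coordinates.

```python
import threading

# Shared parameter vector, updated lock-free by all workers.
model = [0.0] * 4

def worker(examples, lr=0.1, epochs=100):
    """Plain SGD on sparse examples: each example is (idx_vals, target)
    where idx_vals lists (coordinate, value) pairs of the active features."""
    for _ in range(epochs):
        for idx_vals, target in examples:
            pred = sum(model[i] * v for i, v in idx_vals)
            err = pred - target
            for i, v in idx_vals:
                model[i] -= lr * err * v  # lock-free write to shared state

# Two workers whose examples touch disjoint coordinates -- the ideal
# case for asynchronous updates, since their writes never collide.
t1 = threading.Thread(target=worker, args=([([(0, 1.0)], 1.0)],))
t2 = threading.Thread(target=worker, args=([([(1, 1.0)], 2.0)],))
t1.start(); t2.start(); t1.join(); t2.join()
print([round(w, 3) for w in model])  # ≈ [1.0, 2.0, 0.0, 0.0]
```

When the model itself is too large for memory, the vertical partitioning described above additionally splits `model` across machines, which is where concurrent requests to shared partitions become the bottleneck.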
Biography
Martin Torres is a PhD student in the EECS graduate group at UC Merced, working with Prof. Florin Rusu. He received his BS in Computer Science and Cognitive Science from California State University Stanislaus. His research includes large-scale data analytics, focusing on optimizing various machine learning algorithms across different architectures and systems.
Sept. 22
Urban Impervious Surface Extraction Using High-Resolution Remote Sensing Images, Dr. Zhenfeng Shao, State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University
Abstract
Impervious surfaces are anthropogenic features through which water cannot infiltrate into the soil. Impervious surface coverage is a significant indicator of the degree of urbanization and the quality of the urban eco-environment. The rapid development of urbanization brings massive expansion of impervious surfaces, influencing the regional eco-environment and restricting regional sustainable development.
This talk will focus on methods for urban impervious surface extraction from high-resolution remote sensing images. An object-oriented framework is proposed. Buffalo in the United States and several cities in China are selected as case study areas. Various high-resolution images, including IKONOS, GeoEye, GF-1, GF-2, ZY3 and other mapping satellites, are used. Challenges and future work, such as dynamic monitoring of impervious surfaces, will be discussed.
Biography
Zhenfeng Shao, a full Professor at Wuhan University, China, received the Ph.D. degree in photogrammetry and remote sensing from Wuhan University, China, in 2004, working with the State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing (LIESMARS). He has published more than 40 peer-reviewed articles in international journals. His research interests include high-resolution image processing and remote sensing applications.
Dr. Shao was a recipient of the Talbert Abrams Award for the Best Paper in Image Matching from the American Society for Photogrammetry and Remote Sensing in 2014, and the New Century Excellent Talents in University from the Ministry of Education of China in 2012.
Oct. 06
Learning Binary Hash Functions: Optimisation- and Ensemble-based Approaches, Ramin Raziperchikolaei, UC Merced
Abstract
An attractive approach for fast search in image databases is binary hashing, where a hash function maps each high-dimensional, real-valued image onto a low-dimensional, binary vector. The search for similar images is done in the binary space, which is much faster because Hamming distances can be computed with hardware operations. However, binary hashing introduces error: images that were similar in the real space may no longer be similar in the binary space. The main goal of binary hashing is to reduce this error as much as possible by learning hash functions that map dis/similar images onto dis/similar binary codes. In this talk, I will describe our work on finding better ways to learn hash functions. In the first part of my talk, I will focus on optimisation-based approaches, in which a complicated objective function is defined over the parameters of the hash function. Optimising this nonconvex and nonsmooth objective function is difficult because the output of the hash function is binary. Previous hashing papers ignore the binary constraints and use ad-hoc methods to solve the problem. In our work, we use the "method of auxiliary coordinates (MAC)" to optimise the objective function correctly, preserving the binary constraints and learning the binary codes and the hash functions jointly. This better optimisation leads to better hash functions, which are more accurate in nearest-neighbour search. The main difficulty of the optimisation-based approach is that all the single-bit hash functions are coupled inside the objective function, which makes the optimisation slow. In the second part of my presentation, I will talk about our proposed ensemble-based approach, which overcomes this difficulty. The idea is to learn the single-bit hash functions independently and combine them into the final hash function.
We use the ensemble-based techniques to make sure that the hash functions are different. This approach gives us several advantages like simpler optimisation problems, massive parallelization, and better performance in image retrieval. Finally, we show that the diversity-based approaches can get even simpler by guessing the single-bit binary codes of the images.
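To see why searching in the binary space is fast: the Hamming distance between two codes is a bitwise XOR followed by a population count, both of which map to cheap hardware instructions. A toy sketch (the codes and database here are invented for illustration, not learned hash outputs):

```python
def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary codes stored as integers:
    XOR marks the differing bits, then count the ones."""
    return bin(a ^ b).count("1")

# Tiny "database" of 8-bit binary codes produced by some hash function.
database = {"img0": 0b10110010, "img1": 0b10110011, "img2": 0b01001100}
query = 0b10110110

# Linear scan in Hamming space; real systems use multi-index hash
# tables over the codes for sub-linear search.
nearest = min(database, key=lambda k: hamming(database[k], query))
print(nearest)  # img0 (distance 1)
```

The error the talk aims to minimize is exactly the mismatch between this Hamming-space ranking and the ranking induced by distances between the original real-valued images.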
Biography
Ramin Raziperchikolaei is a PhD candidate in the EECS Department of UC Merced. He received his BS in Computer Engineering in 2010 from Iran University of Science and Technology and his MS in Artificial Intelligence in 2012 from Sharif University of Technology, Iran. Since 2013, he has been working towards his PhD degree in machine learning. His research has focused on learning binary hash functions for fast image retrieval.
Oct. 13
Computational Social Science, Professor Alex Petersen, UC Merced
Abstract
The timely combination of accessible computational tools and data availability has led to advances across a wide range of scientific domains in the digital era. In this way, computational tools & methods represent a scientific ‘commons’ that brings together researchers from different disciplines, facilitating interdisciplinary endeavors and cross-disciplinary career paths. As a result, several new research communities have sprouted (e.g. Quantitative Social Science, Computational Social Science, Data Science, Digital Humanities, etc.), all of which occupy and leverage this commons. In this talk I will discuss the ‘data science’ pipeline, which includes identifying data sources; accessing or “scraping” raw data; cleaning, organizing, merging and identifying potential pitfalls in the data; exploring and visualizing the underlying statistics; and finally modeling the data in the context of relevant research questions. I will provide examples of computationally-driven social science from my own research, pertaining to the Science of Science & Innovation, as well as an example of the data science pipeline using the Zillow API that produces longitudinal cross-sectional data on housing prices in several cities local to UC Merced.
Biography
Dr. Petersen is an assistant professor in the Management of Complex Systems unit at UC Merced. His research combines perspectives and methods from statistical physics, network science, computational social science, and econometrics, in order to model science and innovation processes that occur across multiple scales: from individual publications and careers to national innovation systems.
Oct. 20
Optimizing Memory Efficiency for Deep Neural Networks on GPUs, Dr. Chao Li, Qualcomm Research
Abstract
Deep Neural Nets such as Deep Convolutional Networks have achieved state-of-the-art results in various computer vision tasks. Leveraging large training data sets, deep Convolutional Neural Networks (CNNs) have evolved into deep multi-layer computational structures with high recognition accuracy. Due to their substantial compute and memory operations, however, they require significant execution time. The massive parallel computing capability of GPUs makes them one of the ideal platforms to accelerate CNNs, and a number of GPU-based CNN libraries have been developed. While existing work mainly focuses on the computational efficiency of CNNs, their memory efficiency has been largely overlooked. Yet CNNs have intricate data structures, and their memory behavior can have a significant impact on performance. In this talk, I will present our study on optimizing the memory efficiency of DNNs on GPUs. Specifically, we study the memory efficiency of various CNN layers and reveal the performance implications of both data layouts and memory access patterns. Experiments show the universal effect of our proposed optimizations, with speedups of up to 27.9x for a single layer and up to 5.6x on whole networks.
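The data-layout point can be illustrated with linear offsets into a row-major 4-D activation tensor. This is a generic sketch of the common NCHW versus NHWC layouts, our illustration rather than the talk's specific findings: under NCHW, consecutive width positions are adjacent in memory, so neighboring GPU threads that each load one pixel issue coalesced accesses; under NHWC they are C elements apart.

```python
def offset_nchw(n, c, h, w, C, H, W):
    """Linear offset of element (n, c, h, w) in a row-major NCHW tensor."""
    return ((n * C + c) * H + h) * W + w

def offset_nhwc(n, c, h, w, C, H, W):
    """Linear offset of the same element in a row-major NHWC tensor."""
    return ((n * H + h) * W + w) * C + c

C, H, W = 3, 4, 5
# Stride between horizontally adjacent pixels of one channel:
stride_nchw = offset_nchw(0, 0, 0, 1, C, H, W) - offset_nchw(0, 0, 0, 0, C, H, W)
stride_nhwc = offset_nhwc(0, 0, 0, 1, C, H, W) - offset_nhwc(0, 0, 0, 0, C, H, W)
print(stride_nchw, stride_nhwc)  # 1 3
```

Which layout wins depends on the access pattern of the layer (e.g. whether adjacent threads iterate over width or over channels), which is why a per-layer study of layouts and access patterns pays off.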
Biography
Chao Li is currently Senior System Engineer (Researcher) in Qualcomm GPU Research Team. His working area lies in computer architecture and programming, especially on exploring new performance features for next-generation GPU systems. He obtained the PhD degree in Computer Engineering at North Carolina State University in 2016. His works have been published in top computer system conferences such as SC, ICS, CGO, PPoPP, ISPASS, etc. He received the ACM/IEEE Supercomputing Conference Best Student Paper Finalist Award in 2016.
Oct. 27
Say Hello to Waymo, Dr. Ioan Sucan, Waymo
Abstract
Driving is an integral part of our lives. We do it for fun, but more often out of necessity. Unfortunately, it is not always safe, and it almost always takes more time than we'd like. Worldwide, 1.2M people die annually on our roadways. In the US alone, we kill 35,000 people a year, the equivalent of a 737 falling out of the sky every working day of the entire year. The vast majority of these accidents involve human error. This makes self-driving technology an enticing promise to greatly improve safety on the road. This talk will provide an overview of Waymo's self-driving technology, with a focus on safety considerations.
Biography
Ioan A. Șucan is currently a Research Software Engineer at Waymo (formerly part of X / Google[x]), working on motion planning for self-driving cars. Before joining Waymo, Dr. Șucan was a Research Scientist at Willow Garage, where he worked on a number of open-source software projects. Dr. Șucan's best-known contributions are MoveIt!, a motion planning and manipulation framework, and the Open Motion Planning Library (OMPL), a software library of sampling-based motion planning algorithms. Dr. Șucan received the Ph.D. and M.S. degrees in computer science from Rice University, Houston TX, in 2011 and 2008, respectively, under the supervision of Prof. Lydia Kavraki. He received the B.S. degree in electrical engineering and computer science from Jacobs University, Bremen, Germany, in 2006.
Nov. 3
Towards Accelerator-Rich Architectures and Systems, Dr. Zhenman Fang, Xilinx
Abstract
With Intel's $16.7B acquisition of Altera and the deployment of FPGAs in major cloud service providers including Microsoft and Amazon, we are entering a new era of customized computing. In future architectures and systems, it is anticipated that there will be a sea of heterogeneous accelerators customized for important application domains, such as machine learning and personalized healthcare, to provide better performance and energy-efficiency. Many research problems are still open, such as how to efficiently integrate accelerators into future chips and commodity datacenters, and how to program such accelerator-rich architectures and systems.
In this talk, I will first give a quick overview of my research on accelerator-rich architectures and systems, which spans from application drivers to underlying computer architectures. Then I will present our recent work on CPU-accelerator co-design, where we provide efficient and unified address translation support between CPU cores and accelerators [HPCA 2017 Best Paper Nominee]. It shows that a simple two-level TLB design for accelerators plus the host core MMU for accelerator page walking can be very efficient. On average, it achieves 7.6x speedup over the naïve IOMMU and there is only 6.4% performance gap to the ideal address translation. Third, I will present the concept of accelerators-as-a-service in cloud deployment and introduce our open-source Blaze prototype system [ACM SOCC 2016]. Blaze provides programming and runtime support to enable easy and efficient FPGA accelerator integration into state-of-the-art big data framework Apache Spark. By deploying a PCIe-based FPGA board into each CPU server using Blaze, it can consolidate the cluster size by several folds while providing the same system throughput. Finally, I will talk about some future research that will enhance architecture, programming, compiler, runtime, and security support to accelerator-rich architectures and systems.
Biography
Dr. Zhenman Fang was a postdoc in the UCLA Computer Science Department beginning in July 2014, and recently moved to Xilinx in San Jose in mid-September. During his postdoc, Zhenman worked with Prof. Jason Cong and Prof. Glenn Reinman, and was also a member of two multi-university centers: the Center for Domain-Specific Computing (CDSC) and the Center for Future Architectures Research (C-FAR). Zhenman received his PhD in June 2014 from Fudan University, China, and spent the last 15 months of his PhD program visiting the University of Minnesota at Twin Cities.
Zhenman's research lies at the boundary of heterogeneous and energy-efficient computer architectures, big data workloads and systems, and system-level design automation. He has published 10+ papers in top venues spanning computer architecture (HPCA, TACO, ICS), design automation (DAC, ICCAD, FCCM, IEEE Design & Test), and cloud computing (ACM SOCC). Moreover, he actively serves on the organizing and program committees of top conferences including HPCA 2017, ICCD 2017, IISWC 2017, DATE 2018, and ICS 2018. Finally, he has received several awards, including a best paper nomination at HPCA 2017, a best paper award at MEMSYS 2017, a postdoc fellowship from UCLA, and a best demo award at the C-FAR center annual review. More details can be found on his personal website: https://sites.google.com/site/fangzhenman/.
Nov. 17
Sparse Representation of Agent States in Reinforcement Learning, Jacob Rafati, UC Merced
Dec. 1
Value Alignment in Artificial Intelligence, Dylan Hadfield-Menell, UC Berkeley
Abstract
I will give an overview of some of the recent work we have been pursuing in formalizing, understanding and solving 'the value alignment problem.' Loosely speaking, this is the problem of ensuring that an AI system's behavior aligns with its designer's or users' intended objective. This is closely related to the well-studied principal-agent problem from economics, where a firm needs to align an employee's incentives with the firm's ultimate goal. I will present Cooperative Inverse Reinforcement Learning, our initial attempt to mathematically formalize the value alignment problem, and discuss the implications of our framework for human-robot interaction and robust AI design.
Biography
I'm a fifth year Ph.D. student at UC Berkeley, advised by Anca Dragan, Pieter Abbeel, and Stuart Russell. My research focuses on the value alignment problem in artificial intelligence. My goal is to design algorithms that learn about and pursue the intended goal of their users, designers, and society in general. My recent work has focused on algorithms for human-robot interaction with unknown preferences and reliability engineering for learning systems.
I'm also interested in work that bridges the gap between AI theory and practical robotics, and work on the problem of integrated task and motion planning. Before coming to Berkeley, I did a Master's of Engineering with Leslie Kaelbling and Tomás Lozano-Pérez at MIT. When I'm not working on research I'm usually wrapped up in a Sci-Fi or Fantasy novel, playing ultimate frisbee, or skiing.
Dec. 8
AuCloud: the Cloud for the Transportation Industry, Dr. Carlos Garcia-Alvarado, Autonomic, Inc.
Abstract
Increased vehicle connectivity and the “pay-as-you-go” business model are revolutionizing the transportation industry. It is projected that in less than a decade all new passenger cars will be connected, resulting in new market opportunities, safety-enhancing technologies, and customer behaviors. Autonomic is shaping the ongoing disruption in transportation with the Autonomic Transportation Cloud. This platform is a set of services designed to empower auto manufacturers to create fleet and retail mobility applications. In addition, our AuCloud also supports Transportation-as-a-Service solutions similar to those offered for vehicle hailing and matching. Our engineering team has been confronted with a myriad of technical challenges, such as manufacturer-specific telemetry device protocols, ingesting and processing massive data streams, out-of-order data, late data arrival, application scalability and modularity, data curation and normalization, APIs for rapid development, data analytics on continuous and historic data, data security, and DevOps, among others. This talk will focus on discussing some of these challenges, and on our microservice-based architecture that supports scalable, reliable and high-performance operations in order to solve them. To conclude, we will talk about the future of our industry and how Autonomic is getting ready to “drive it.”
Biography
Carlos Garcia-Alvarado is a Senior Software Engineer at Autonomic, Inc., where he works with streaming data in the area of vehicle transportation. Previously, he worked at Amazon Web Services as a Senior Software Development Manager on Amazon Redshift, and at Pivotal Software as a Principal Product Manager and Staff Engineer supporting the development of the HAWQ and Greenplum data warehouses. Carlos has remained active in the database systems community, co-chairing the DOLAP Workshop (2015), publishing more than 30 peer-reviewed proceedings or articles, and being issued several patents. He holds master’s and doctoral degrees in Computer Science from the University of Houston and a master’s degree in Industrial Engineering from the Instituto Tecnologico de Estudios Superiores de Monterrey.
__
Spring 2017
Jan. 20
Advanced Database Techniques for Scientific Data Processing, Weijie Zhao, UC Merced
Abstract
Scientific applications are generating an ever-increasing volume of multi-dimensional data that is largely processed inside distributed array databases and frameworks. Similarity join is a fundamental operation across scientific workloads that requires complex processing over an unbounded number of pairs of multi-dimensional points. In this talk, we introduce a novel distributed similarity join operator for multi-dimensional arrays. Unlike immediate extensions of array join and relational similarity join, the proposed operator minimizes overall data transfer and network congestion while providing load balancing, without completely repartitioning and replicating the input arrays. We formally define the array similarity join and present the design, optimization strategies, and evaluation of the first array similarity join operator. Furthermore, if the data are rapidly updated, the join result can be treated as a view defined by the similarity join. We model the process as incremental view maintenance with batch updates and give a three-stage heuristic that finds effective update plans. Moreover, as a side-effect of view maintenance, the heuristic continuously repartitions the array and the view based on a window of past updates. We design an analytical cost model for integrating materialized array views into queries. A thorough experimental evaluation confirms that the proposed techniques are able to incrementally maintain a real astronomical data product in a production pipeline.
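For reference, the semantics of a similarity join can be stated in a few lines: emit every pair of points, one from each input, within distance eps of each other. The following is a naive single-node baseline with invented example data, not the distributed operator from the talk, whose contribution is precisely avoiding this exhaustive pairwise comparison across machines:

```python
from itertools import product

def similarity_join(A, B, eps):
    """All pairs (p, q) with p in A, q in B and Euclidean distance <= eps.
    Comparing squared distances avoids the square root."""
    def dist2(p, q):
        return sum((x - y) ** 2 for x, y in zip(p, q))
    return [(p, q) for p, q in product(A, B) if dist2(p, q) <= eps * eps]

A = [(0.0, 0.0), (5.0, 5.0)]
B = [(0.5, 0.0), (9.0, 9.0)]
print(similarity_join(A, B, eps=1.0))  # [((0.0, 0.0), (0.5, 0.0))]
```

The naive version is quadratic in the number of points and, in a distributed setting, would ship every partition to every machine; the proposed operator exploits the eps bound to co-locate only the array chunks that can actually produce matches.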
Biography
Weijie Zhao is a PhD student in the EECS graduate group at UC Merced, working with Prof. Florin Rusu. He received his BS from East China Normal University in Shanghai. His research interests include databases and scientific data management. Weijie is an avid computer programming contestant.
Jan. 27
Image Editing and Learning Filters for Low-level Vision, Yi-Hsuan Tsai and Sifei Liu, UC Merced
Abstract
In the first part of this talk, we present a semantic-aware image editing algorithm for automatic sky replacement. The key idea of our algorithm is to utilize visual semantics to guide the entire process, including sky segmentation, search and replacement. First, we train a deep convolutional neural network for semantic scene parsing, which is used as a visual prior to segment sky regions in a coarse-to-fine manner. Second, in order to find proper skies for replacement, we propose a data-driven scheme based on the semantic layout of the input image. Finally, to re-compose the stylized sky with the original foreground naturally, an appearance transfer method is developed to match statistics locally and semantically. We show that the proposed algorithm can automatically generate a set of visually pleasing and realistic results. In the second part, a work on learning image filters for low-level vision is presented (e.g., edge-preserving filtering and denoising), in which a unified hybrid neural network is proposed. The network contains several spatially variant recurrent neural networks (RNNs) as equivalents of a group of distinct recursive filters for each pixel, and a deep convolutional neural network (CNN) that learns the weights of the RNNs. The proposed model needs neither a large number of convolutional channels nor big kernels to learn features for low-level vision filters. Experimental results show that many low-level vision tasks can be effectively learned and carried out in real time by the proposed algorithm.
Biography
Yi-Hsuan Tsai (https://sites.google.com/site/yihsuantsai/) received the B.S. in Electronics Engineering from National Chiao-Tung University, Hsinchu, Taiwan and the M.S. in Electrical Engineering and Computer Science from University of Michigan, Ann Arbor. He is currently working toward the PhD advised by Prof. Ming-Hsuan Yang at UC Merced and is the recipient of the Graduate Dean's Dissertation Fellowship in 2016. He was invited to attend the doctoral consortium at the IEEE Conference on Computer Vision and Pattern Recognition in 2016. His research interests include computer vision, computational photography and machine learning with the focus on visual object recognition and image editing. He also did research internships at Qualcomm Research, Max Planck Institute and Adobe Research.
Sifei Liu (http://www.sifeiliu.net/) is a Ph.D. candidate in Electrical Engineering and Computer Science working with Prof. Ming-Hsuan Yang. She completed her M.C.S. at the University of Science and Technology of China (USTC) under Stan Z. Li and Bin Li, and received her B.S. in control science and technology from North China Electric Power University. She received the Baidu fellowship in 2013. In 2013 and 2014, she was an intern at the Baidu Deep Learning Institute. In addition, she was a visiting student at the Chinese University of Hong Kong in 2015. She was invited to attend the doctoral consortium at the IEEE Conference on Computer Vision and Pattern Recognition in 2016. Her research interests include computer vision, machine learning and computational photography.
Feb. 3
It's All about Cache, Ming Zhao, Arizona State University
Abstract
This talk is about cache; more specifically, solid-state-storage-based caches for large-scale computing systems such as cloud computing and big data systems. With increasing workload data intensity and increasing levels of consolidation in such systems, storage is becoming a serious bottleneck. Emerging solid-state storage devices such as flash memory and 3D XPoint have the potential to address this scalability issue by providing a new caching layer between main memory and hard drives in the storage hierarchy. However, solid-state storage has limited capacity and endurance, and needs to be managed carefully when used for caching. This talk will present several recent works by the ASU VISA Research Lab that address these limitations and make effective use of solid-state caching.
First, the talk will introduce CloudCache, an on-demand cache allocation solution for understanding the cache demands of workloads and allocating the shared cache capacity efficiently. It is able to reduce a workload’s cache usage by 78% and the amount of writes sent to cache device by 40%, compared to traditional working-set-based approach. Second, the talk will present CacheDedup, an in-line cache deduplication solution that integrates caching and deduplication with duplication-aware cache replacement to improve the performance and endurance of solid-state caches. It can reduce a workload’s I/O latency by 51% and the amount of writes sent to cache device by 89%, compared to traditional cache management approaches. Finally, the talk will be concluded with a brief overview of the systems research at the VISA lab.
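The deduplication idea can be sketched abstractly: index cached blocks by a content fingerprint so that writing a duplicate block consumes neither a cache slot nor a device write, which is how write reduction improves endurance. This toy LRU cache is our own illustration, not the CacheDedup design or its duplication-aware replacement policy:

```python
from collections import OrderedDict
import hashlib

class DedupCache:
    """Toy LRU block cache that deduplicates by content fingerprint:
    identical blocks at different addresses share one cache slot."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.addr_to_fp = {}           # block address -> fingerprint
        self.fp_store = OrderedDict()  # fingerprint -> block, in LRU order
        self.writes = 0                # writes actually sent to the device

    def put(self, addr, block):
        fp = hashlib.sha1(block).hexdigest()
        self.addr_to_fp[addr] = fp
        if fp in self.fp_store:
            self.fp_store.move_to_end(fp)  # duplicate: refresh LRU, no write
            return
        if len(self.fp_store) >= self.capacity:
            self.fp_store.popitem(last=False)  # evict least recently used
        self.fp_store[fp] = block
        self.writes += 1

cache = DedupCache(capacity=2)
cache.put(0, b"AAAA"); cache.put(1, b"AAAA"); cache.put(2, b"BBBB")
print(len(cache.fp_store), cache.writes)  # 2 2 -- three puts, two device writes
```

Even this toy shows the two wins the talk quantifies: fewer device writes (endurance) and more distinct content held in the same capacity (hit rate).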
Biography
Ming Zhao is an associate professor at the Arizona State University (ASU) School of Computing, Informatics, and Decision Systems Engineering (CIDSE), where he directs the research laboratory for Virtualized Infrastructures, Systems, and Applications (VISA, http://visa.lab.asu.edu). His research is in the areas of experimental computer systems, including distributed/cloud, big-data, and high-performance systems, as well as operating systems and storage in general. He is also interested in interdisciplinary studies that bridge computer systems research with other domains. His work has been funded by the National Science Foundation (NSF), the Department of Homeland Security, the Department of Defense, the Department of Energy, and industry, and his research outcomes have been adopted by several production systems in industry. Dr. Zhao has received the NSF Faculty Early Career Development (CAREER) award, the Air Force Summer Faculty Fellowship, the VMware Faculty Award, and the Best Paper Award of the IEEE International Conference on Autonomic Computing. He received his bachelor's and master's degrees from Tsinghua University, and his Ph.D. from the University of Florida.
Feb. 10
Visual Understanding: Face Parsing and Video Object Segmentation, Sifei Liu and Yi-Hsuan Tsai, UC Merced
Abstract
In the first part of this talk, we present a method for face parsing via a conditional random field with unary and pairwise classifiers. We develop a novel multi-objective learning method that optimizes a single unified deep convolutional network with two distinct non-structured loss functions: one encoding the unary label likelihoods and the other encoding the pairwise label dependencies. Moreover, we regularize the network by using a nonparametric prior as new input channels in addition to the RGB image, and show that significant performance improvements can be achieved with a much smaller network size. Experiments show state-of-the-art, accurate labeling results on challenging images for real-world applications. The second part presents work on video object segmentation, a challenging problem due to fast-moving objects, deformed shapes, and cluttered backgrounds. To obtain accurate segmentation across time, we propose an efficient algorithm that considers video segmentation and optical flow estimation simultaneously. For video segmentation, we formulate a principled, multi-scale, spatio-temporal objective function that uses optical flow to propagate information between frames. For optical flow estimation, particularly at object boundaries, we compute the flow independently in the segmented regions and recompose the results. We call this process "object flow" and demonstrate the effectiveness of jointly optimizing optical flow and video segmentation using an iterative scheme.
Biography
Sifei Liu (http://www.sifeiliu.net/) is a Ph.D. candidate in Electrical Engineering and Computer Science working with Prof. Ming-Hsuan Yang. She completed her M.C.S. at the University of Science and Technology of China (USTC) under Stan Z. Li and Bin Li, and received her B.S. in control science and technology from North China Electric Power University. She received the Baidu fellowship in 2013. In 2013 and 2014, she was an intern at the Baidu Deep Learning Institute, and in 2015 she was a visiting student at the Chinese University of Hong Kong. She was invited to attend the doctoral consortium at the IEEE Conference on Computer Vision and Pattern Recognition in 2016. Her research interests include computer vision, machine learning, and computational photography.
Yi-Hsuan Tsai (https://sites.google.com/site/yihsuantsai/) received his B.S. in Electronics Engineering from National Chiao-Tung University, Hsinchu, Taiwan, and his M.S. in Electrical Engineering and Computer Science from the University of Michigan, Ann Arbor. He is currently working toward his Ph.D. at UC Merced, advised by Prof. Ming-Hsuan Yang, and is the recipient of the Graduate Dean's Dissertation Fellowship in 2016. He was invited to attend the doctoral consortium at the IEEE Conference on Computer Vision and Pattern Recognition in 2016. His research interests include computer vision, computational photography, and machine learning, with a focus on visual object recognition and image editing. He has also completed research internships at Qualcomm Research, the Max Planck Institute, and Adobe Research.
Feb. 17
Interactive Visual Computing for Knowledge Discovery in Science, Engineering, and Training, Jian Chen, University of Maryland, Baltimore County
Abstract
Imagine computer displays becoming a space that augments human thinking. Essential human activities such as seeing, gesturing, and exploring can be coupled with powerful computational solutions through natural interfaces and accurate visualizations. In this talk, I will present our research efforts to quantify visualization techniques of all kinds. Our ongoing work includes research in: (1) perceptually accurate visualization – constructing a visualization language to study how to depict spatially complex fields in quantum-physics simulations and brain-imaging datasets; (2) using space to compensate for limited human memory – developing new computing and interactive capabilities for bat-flight motion analysis in a new metaphorical interface; and (3) extending exploratory metaphors to biological pathways to make possible integrated analysis of multifaceted datasets. During the talk, I will point to a number of other projects being carried out by my team. I will close with some thoughts on automating the evaluation of visualizations, and venture that a science of visualization and metaphors can now be developed in full, and that its success will be crucial to understanding data-to-knowledge techniques in traditional desktop and immersive settings.
Biography
Jian Chen is an Assistant Professor in the Department of Computer Science and Electrical Engineering at the University of Maryland, Baltimore County (UMBC), where she leads the Interactive Visual Computing Lab (http://ivcl.umbc.edu) and UMBC's Immersive Hybrid Reality Lab (http://tinyurl.com/ztnvdmf). Her general research interests are in the design and evaluation of visualizations (encoding of spatially complex brain imaging, integrating spatial and non-spatial data, perceptually accurate visualization, and event analysis) and interaction (exploring large biological pathways, immersive modeling, embodiment, and gesture input). She has garnered best-paper awards at international conferences, and her work is funded by NSF, NIST, and DoD. She is also a UMBC innovation fellow and a co-chair of the first international workshop on the emerging field of Immersive Analytics. Chen did her postdoctoral research at Brown University jointly with the Departments of Computer Science (with Dr. David H. Laidlaw) and Ecology and Evolutionary Biology. She received her Ph.D. in Computer Science from Virginia Tech with Dr. Doug A. Bowman. To learn more about Jian Chen and her work, please visit http://www.csee.umbc.edu/~jichen.
Feb. 24
Situated Intelligent Interactive Systems, Zhou Yu, Carnegie Mellon University
Abstract
Communication is an intricate dance, an ensemble of coordinated individual actions. Imagine a future where machines interact with us like humans, waking us up in the morning, navigating us to work, or discussing our daily schedules in a coordinated and natural manner. Current interactive systems being developed by Apple, Google, Microsoft, and Amazon attempt to reach this goal by combining a large set of single-task systems. But products like Siri, Google Now, Cortana, and Echo still follow pre-specified agendas; they cannot transition between tasks smoothly or track and adapt to different users naturally. My research draws on recent developments in speech and natural language processing, human-computer interaction, and machine learning to work towards the goal of developing situated intelligent interactive systems. These systems can coordinate with users to achieve effective and natural interactions. I have successfully applied the proposed concepts to various tasks, such as social conversation, job interview training, and movie promotion. My team's proposal on engaging social conversation systems was selected to receive $100,000 from Amazon to compete in the Amazon Alexa Prize Challenge.
Biography
Zhou Yu is a graduating Ph.D. student at the Language Technologies Institute in the School of Computer Science, Carnegie Mellon University, working with Prof. Alan W. Black and Prof. Alexander I. Rudnicky. She interned with Prof. David Suendermann-Oeft at the ETS San Francisco office on cloud-based multimodal dialog systems in the summers of 2015 and 2016, and with Dan Bohus and Eric Horvitz at Microsoft Research on human-robot interaction in fall 2014. Prior to CMU, she received a B.S. in Computer Science and a B.A. in Linguistics from Zhejiang University in 2011, where she worked with Prof. Xiaofei He and Prof. Deng Cai on machine learning and computer vision, and with Prof. Yunhua Qu on machine translation.
March 3
Moving Towards Customizable Autonomous Driving, Chandrayee Basu, UC Merced
Abstract
In this talk, I will first present the results of my first research project on autonomous driving as a Ph.D. student at UC Merced, conducted in collaboration with Berkeley DeepDrive (http://bdd.berkeley.edu/project/implicit-communication-through-motion). With progress in enabling autonomous cars to drive safely on the road, it is time to start asking how they should be driving. From the days of ALVINN to the latest autonomous driving technologies such as NVIDIA's Drive PX, researchers have used Learning from Demonstration to teach autonomous cars how to drive. When it comes to customizing autonomous driving, a common answer is therefore that the car should adopt the user's style. In this project, we questioned this assumption and conducted user research in a driving simulator to test our hypothesis. We found that users tend to prefer a significantly more defensive driving style than their own. Interestingly, they prefer the style they think is their own, even though their actual driving style tends to be more aggressive. The results show that conventional Learning from Demonstration algorithms will be inadequate for personalizing autonomous driving. In the second part of the talk, I will discuss the implications of this result in greater detail and present some potential algorithms we can use to augment user demonstrations.
Biography
Chandrayee Basu (http://chandrayee-basu.squarespace.com) is a second-year Ph.D. student at UC Merced, advised by Prof. Mukesh Singhal (UC Merced) and Prof. Anca Dragan (UC Berkeley). She is applying human-robot interaction algorithms to integrate human interaction into the motion planning of autonomous cars. Chandrayee has multi-disciplinary research experience in design, applied machine learning, smart environments, and human-robot interaction, acquired as a graduate student at UC Berkeley and Carnegie Mellon University.
March 10
Inventing in the Research Lab vs Startups, David Merrill, Lemnos Labs Inc.
This talk is part of the EECS | CITRIS Frontiers in Technology Series
Abstract
In this talk I will compare and contrast research versus startup innovation, based on my experiences at Stanford, the MIT Media Lab, and Bay Area startups. I'll discuss how the desired outcomes of each context encourage different kinds of risk and exploration, takeaways from my research experiences, and how we structure the early ideation process at Lemnos Labs, where I am an Entrepreneur in Residence.
Biography
David Merrill is a technology executive and hardware startup founder with a background in computer science and human-computer interaction. His tactile learning system startup, Sifteo, based on his Ph.D. work at MIT, was acquired by drone-maker 3D Robotics in 2014 to become the kernel of a new consumer product group. At 3D Robotics he took various roles on the team that launched Solo: the Smart Drone in 2015, and then led R&D and IP. He is an alumnus of MIT and of Stanford's Computer Science and Symbolic Systems programs, a TED speaker, a human-computer interaction expert, and a drone builder. His work has been featured by the Discovery Channel, Popular Science, Wired, and the New York Times. Merrill is currently an Entrepreneur in Residence at Lemnos Labs, an early-stage VC firm in San Francisco, where he is working on his next project.
March 17
Data-Based Full-Body Motion Coordination and Planning, Alain Juarez-Perez, UC Merced
Abstract
In this talk I will present new approaches for achieving full-body motion coordination for humanoid virtual characters. I will first present a parametric data-based mobility controller with known coverage and validity characteristics, achieving flexible real-time deformations for locomotion control. I will then present a method for switching between different types of locomotion in order to navigate cluttered environments. The proposed method incorporates the locomotion capabilities of the character in the path planning stage, producing paths that address the trade-off between path length and locomotion behavior choice for handling narrow passages. In the last part, I will introduce a new approach for the coordination of locomotion with manipulation. The approach is based on a coordination model built from motion capture data in order to represent proximity relationships between the action and the environment. The result is a real-time controller that can autonomously produce environment-dependent full-body motion strategies. The obtained coordination model is successfully applied on top of generic walking controllers, achieving controllable characters which are able to perform complete full-body interactions with the environment.
Biography
Alain Juarez-Perez is a Ph.D. candidate in the Electrical Engineering and Computer Science graduate group of the University of California, Merced. His work is being developed at the Computer Graphics Lab under the advice of Prof. Marcelo Kallmann and has been supported by a UC-Mexus Doctoral Fellowship. He received his B.S. in Computer Science in 2012 from the University of Guanajuato, and in 2014 he was a visiting research assistant at the USC Institute for Creative Technologies. His research interests include Computer Animation, Data-Driven Algorithms, Motion Capture, Computational Geometry, Machine Learning, Computer Graphics and Motion Planning.
March 24
Securing Internet of Things, Chen Qian, UC Santa Cruz
Abstract
In this talk, I will introduce my recent research projects on Internet of Things (IoT) security. First, I will introduce a physical layer authentication method for RFID tags. Second, I will talk about a fast and reliable protocol for authentication and key agreement among multiple IoT devices based on wireless signal information. Third, I will introduce an IoT data communication framework that guarantees data authenticity and integrity.
Biography
Chen Qian is an Assistant Professor in the Department of Computer Engineering at the University of California, Santa Cruz. He was on the Computer Science faculty at the University of Kentucky from 2013 to 2016. He received his Ph.D. from The University of Texas at Austin, where he worked with Simon Lam. His research interests include computer networking and distributed systems, the Internet of Things, network security, and cloud computing. He has published more than 60 papers, most of which appeared in top journals and conferences, including ToN, TPDS, ICNP, INFOCOM, ICDCS, SIGMETRICS, CoNEXT, CCS, NDSS, and UbiComp.
April 7
Stochastic Distribution Control and Its Applications, Hong Wang, PNNL
Abstract
This seminar presents a brief, selective survey of advances in stochastic distribution control, where the purpose of the controller design is to control the shape of the output probability density functions (pdfs) of non-Gaussian and general stochastic systems. This research was motivated by the requirement for distribution shape control in a number of practical systems. In recent years much research has been performed internationally, and journal special issues and invited sessions at major conferences have appeared since 2001. This seminar is expected to provide some up-to-date information on this new area.
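As a sketch of what "controlling the shape of the output pdf" means, one widely used formulation (notation assumed here, not necessarily the speaker's exact one) approximates the output pdf by a B-spline expansion whose weights depend on the control input, and drives it toward a target density:

```latex
% Output pdf as a B-spline expansion with input-dependent weights:
\gamma(y, u_k) \approx \sum_{i=1}^{n} w_i(u_k)\, B_i(y),
\qquad \int_a^b \gamma(y, u_k)\, dy = 1,
% Controller design: choose u_k to minimize the distance to a target pdf g(y):
J(u_k) = \int_a^b \bigl( \gamma(y, u_k) - g(y) \bigr)^{2}\, dy .
```

Shaping the weights $w_i(u_k)$ then becomes a finite-dimensional control problem, which is what makes the pdf-shaping objective tractable.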
Biography
Dr. Hong Wang joined PNNL in February 2016 as a Laboratory Fellow. He is based in the Controls team within the Electricity Infrastructure and Buildings Division of the Energy and Environment Directorate. Prior to joining PNNL, he was a full (chair) professor in process control at the University of Manchester in the U.K. Dr. Wang's research interests are in advanced modelling and control for complex industrial processes, and in fault diagnosis and fault-tolerant control. He originated the research on stochastic distribution control, where the main purpose of control input design is to make the shape of the output probability density functions follow a targeted function. This area alone has found a wide spectrum of potential applications in modelling, data mining, signal processing, optimization, and distributed control systems design. Dr. Wang is the lead author of five books and has published over 300 papers in international journals and conferences. He is a member of three International Federation of Automatic Control technical committees and an associate editor for IEEE Transactions on Control Systems Technology, IEEE Transactions on Automation Science and Engineering, and seven other international journals. He has been an associate editor of IEEE Transactions on Automatic Control, and has served as an IPC member and conference chairman for many international conferences. Dr. Wang has received several best paper awards at international conferences, including the best paper award at Int. Conf. Control 2006, the Jasbar Memorial Prize for his outstanding contribution to science and technology development for the paper industry in 2006, the best theory paper award at the World Congress on Intelligent Control and Automation in 2014, and selection as one of five finalists for the best application paper prize at the 2014 IFAC World Congress. Dr. Wang holds a Ph.D. degree from Huazhong University of Science and Technology (HUST) in P.R. China.
April 14
This seminar has been cancelled.
Mark Palatucci, Anki
This talk is part of the EECS | CITRIS Frontiers in Technology Series
April 21
Bridging the Gap in Grasp Quality Evaluation, Shuo Liu, UC Merced
Abstract
Robot grasp planning has been extensively studied over the last decades and often consists of two different stages: deciding where to grasp an object, and measuring the quality of a tentative grasp. Because these two processes are computationally demanding, form closure grasps are more widely used in practice than force closure grasps, even though the latter are in many cases preferable. In this talk, we introduce our framework for improving grasp quality evaluation. We accelerate the computation of the grasp wrench space, used to measure grasp quality, by exploiting some geometric insights in the computation of the convex hull. In particular, we identify a cutoff sequence to terminate the convex hull calculation with guaranteed convergence to the quality measure. Furthermore, we study how noise at each joint of the manipulator affects grasp quality. Different arm configurations generate different noise distributions at the end-effector, which have a large impact on the robustness of grasping. In the last part of the talk, I will introduce a grasp planner that takes into account the local geometry of the object to be grasped. In particular, for concave objects we exploit the fact that grasping at a concave region can make the grasp more robust. These insights are studied in theory and validated on an experimental platform.
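To make the quality-measure computation concrete: a classic (Ferrari-Canny style) grasp quality metric is the radius of the largest origin-centred ball contained in the convex hull of the contact wrenches, i.e. the minimum distance from the origin to a hull facet. The planar sketch below (pure Python; function names are invented, and this is the brute-force metric, not the speakers' accelerated algorithm) computes that metric for 2D wrench points.

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]


def epsilon_quality(wrenches):
    """Radius of the largest origin-centred ball inside the convex hull of the
    contact wrenches; 0 if the origin lies outside the hull (no force closure)."""
    hull = convex_hull(wrenches)
    if len(hull) < 3:
        return 0.0                 # degenerate hull: origin cannot be strictly inside
    best = float("inf")
    n = len(hull)
    for i in range(n):
        (x1, y1), (x2, y2) = hull[i], hull[(i + 1) % n]
        ex, ey = x2 - x1, y2 - y1
        length = (ex * ex + ey * ey) ** 0.5
        # signed distance from origin to the edge's supporting line
        # (positive when the origin is on the interior side of a CCW hull)
        d = (x1 * ey - y1 * ex) / length
        if d < 0:
            return 0.0             # origin outside the hull
        best = min(best, d)
    return best
```

The talk's cutoff-sequence idea can be read as terminating the hull construction early once the minimum facet distance is certified, rather than building the full hull as done here.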
Biography
Shuo Liu received his B.Sc. degree in computer engineering from the University of Minnesota in 2012 and his B.Sc. degree in mathematics from Beijing Jiaotong University in 2012. In his junior year of college, he participated in RoboCup 2011 and won the championship in the Middle Size League. He also participated in the IROS 2016 Grasping and Manipulation Challenge and won second place in the automation track. Since August 2012 he has been pursuing a Ph.D. degree in electrical engineering and computer science at the University of California, Merced, working with Dr. Stefano Carpin. His interests include manipulation, grasping, and computational geometry.
April 28
Multicopter dynamics and control: surviving the complete loss of multiple actuators and rapidly generating trajectories, Mark Mueller, Mechanical Engineering Dept., UC Berkeley
Abstract
Flying robots, such as multicopters, are increasingly becoming part of our everyday lives, with current and future applications including personal transportation, delivery services, entertainment, and aerial sensing. These systems are expected to be safe and to have a high degree of autonomy. This talk will discuss the dynamics and control of multicopters, including some research results on trajectory generation for multicopters and fail-safe algorithms. Finally, we will present the application of a failsafe algorithm to a fleet of drones performing as part of a live theatre performance on New York's Broadway.
Biography
Mark W. Mueller joined the Mechanical Engineering department at UC Berkeley in September 2016. He completed his Ph.D. studies, advised by Prof. Raffaello D'Andrea, at the Institute for Dynamic Systems and Control at ETH Zurich at the end of 2015. He received a bachelor's degree from the University of Pretoria and a master's from ETH Zurich in 2011, both in Mechanical Engineering. http://www.me.berkeley.edu/people/faculty/mark-mueller
May 5
Robots for the Real World, James Gosling, Liquid Robotics
This talk is part of the EECS | CITRIS Frontiers in Technology Series
Abstract
Coping with hostile environments, the inevitability of failure, and being truly alone.
Biography
James Gosling received a BSc in Computer Science from the University of Calgary, Canada in 1977. He received a PhD in Computer Science from Carnegie-Mellon University in 1983. The title of his thesis was "The Algebraic Manipulation of Constraints". He spent many years as a VP & Fellow at Sun Microsystems. He has built satellite data acquisition systems, a multiprocessor version of Unix, several compilers, mail systems and window managers. He has also built a WYSIWYG text editor, a constraint-based drawing editor and a text editor called Emacs for Unix systems. At Sun his early activity was as lead engineer of the NeWS window system. He did the original design of the Java programming language and implemented its original compiler and virtual machine. He has been a contributor to the Real-Time Specification for Java, and a researcher at Sun labs where his primary interest was software development tools. He then was the Chief Technology Officer of Sun's Developer Products Group and the CTO of Sun's Client Software Group. He briefly worked for Oracle after the acquisition of Sun. After a year off, he spent some time at Google and is now the chief software architect at Liquid Robotics, where he spends his time writing software for the Waveglider, an autonomous ocean-going robot.
Fall 2016
Aug. 26
Combining Virtual Reality, Psychology, Theater and Learning Sciences for Training and Assessment, Arjun Nagendran, University of Central Florida & Mursion
Abstract
Technological advancements over the last decade have opened up a plethora of possibilities for "blue-skies" research. Today, we live in a world where multi-disciplinary teams collaborate to create novel platforms that enhance every aspect of our lives. We now have the foundations to inject cross-disciplinary ideas across traditional fields of study. Identifying voids and applying our expertise across domains results in powerful products that can have a significant impact on society. This talk will center on an application that leverages the ever-diminishing boundaries between virtual reality, psychology, and the learning sciences. The concepts of "avatars" and "inhabiting" will be introduced, after which real-world applications of their use will be demonstrated. In particular, the talk will focus on how human-assisted virtual avatars can be used for training and assessment across several fields, such as healthcare, counselling, hospitality, and education. The effectiveness of these systems in riding the upcoming wave of virtual reality devices, including the Oculus Rift, Gear VR, and the Microsoft HoloLens, will be discussed. The talk will conclude with potential futuristic applications of the concept of "inhabiting".
Biography
Arjun Nagendran is the Co-Founder and Chief Technology Officer at Mursion, Inc., a San Francisco-based startup specializing in the application of virtual reality technology for training and assessment. He completed his Ph.D. in Robotics at the University of Manchester, UK, specializing in landing mechanisms for unmanned air vehicles. Prior to Mursion, he worked for several years as an academic researcher, including leading the ground vehicle systems for Team Tumbleweed, one of the six finalists in the UK Ministry of Defence Grand Challenge. Arjun's research interests include coupling psychology and the learning sciences with technological advancements in remote operation, virtual reality, and control system theory to create high-impact applications. During his academic career, he has served as a committee member and reviewer for several conferences and journals, including the International Conference on Intelligent Robots and Systems (IROS) and the IEEE International Symposium on Mixed and Augmented Reality (ISMAR).
Sept. 2
A Parallel Sorting Algorithm for 130K CPU Cores, Bin Dong, Lawrence Berkeley National Lab
Abstract
Parallel sorting is a fundamental algorithm in computer science, and it has become even more important in the big data era. Utilizing supercomputers for sorting is attractive because their large number of CPU cores has the potential to sort terabytes or even exabytes of data per minute. However, developing a parallel sorting algorithm that is efficient and scalable on a supercomputer is challenging because of the load imbalance caused by data skew and the complex communication patterns caused by multi-core architectures. In this talk, I present our experience in developing and scaling a parallel sorting algorithm named SDS-Sort on a 2.57-petaflop supercomputer.
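The abstract does not spell out SDS-Sort's internals, but the skeleton shared by most scalable parallel sorts, splitter selection from a random sample followed by independent per-bucket sorts, can be sketched as follows. This is a single-process illustration under invented names; real implementations sort one bucket per group of cores and exchange data over the network.

```python
import random
from bisect import bisect_right


def sample_sort(data, n_buckets=4, oversample=8):
    """Sample-sort sketch: pick splitters from a random sample so buckets stay
    balanced even on skewed inputs, route each element to its bucket, then
    sort buckets independently (the independent sorts are the parallel part)."""
    if len(data) <= 1 or n_buckets <= 1:
        return sorted(data)
    # oversampling makes the splitters better quantile estimates
    sample = sorted(random.sample(data, min(len(data), n_buckets * oversample)))
    splitters = [sample[i * len(sample) // n_buckets] for i in range(1, n_buckets)]
    buckets = [[] for _ in range(n_buckets)]
    for x in data:
        buckets[bisect_right(splitters, x)].append(x)  # route by splitter
    out = []
    for b in buckets:            # each bucket sort is independent -> parallelizable
        out.extend(sorted(b))
    return out
```

Because every element in bucket i is bounded by the splitters around it, concatenating the sorted buckets yields a globally sorted sequence with no merge step, which is what lets the per-bucket work scale out.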
Biography
Bin Dong is currently a research scientist in the Scientific Data Management group at LBNL. His research interests are in scalable scientific data management, parallel storage systems, and parallel computing. More specifically, he is exploring new algorithms and data structures for storing, organizing, sorting, indexing, searching, and analyzing big scientific data (mostly multi-dimensional arrays) with supercomputers. Bin Dong earned his Ph.D. in Computer Science and Technology from Beihang University, China in 2013. Then, he joined the Scientific Data Management group at LBNL as a postdoc until 2016.
Sept. 9
Robot Motion Planning Considering Multiple Costs and Multiple Task Specifications, Shams Feyzabadi, UC Merced
Abstract
With the recent dramatic growth of commercial robotic applications in all fields, expectations of robotic systems have escalated as well. For example, robots are tasked with increasingly complex missions featuring multiple costs that must be accounted for. In addition, with robots operating for extended periods of time in unstructured environments, it is often convenient to task the robot with multiple objectives at once and let the system determine a control strategy that jointly considers all of them. In this talk we propose a planner for sequential stochastic decision-making problems in which robots are subject to multiple cost functions and are tasked to complete more than one goal specified using a subset of linear temporal logic operators. Each subgoal is associated with a desired satisfaction probability that will be met in expectation by the policy executed by the controller. The planner builds upon the theory of constrained Markov decision processes and on techniques from the realm of formal verification. Our method is validated both in simulation and in outdoor tasks in which the robot autonomously traveled more than 7.5 km.
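The constrained-MDP machinery the planner builds on has a standard form (illustrative notation; the talk's exact formulation, with temporal-logic subgoals, is richer): maximize expected discounted reward subject to bounds on expected auxiliary costs, one bound per subgoal.

```latex
\max_{\pi} \; \mathbb{E}^{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t)\right]
\quad \text{s.t.} \quad
\mathbb{E}^{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, c_i(s_t, a_t)\right] \le d_i,
\qquad i = 1, \dots, k .
```

A desired satisfaction probability for a subgoal can be expressed as one such constraint by letting $c_i$ indicate failure of that subgoal, which is how constraints in expectation translate into probabilistic guarantees.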
Biography
Shams Feyzabadi is currently a PhD candidate at UC Merced working under the supervision of Prof. Carpin. His field of interest is mobile robotics and more specifically he focuses on motion planning considering multiple cost functions in non-deterministic environments. He received his M.Sc. from Jacobs University Bremen in Germany in 2010 and his B.Sc. from Iran University of Science and Technology in 2007.
Sept. 16
Building the Enterprise Fabric for Big Data with Vertica and Spark Integration, Jeff LeFevre, HPE Vertica
Abstract
Enterprise customers increasingly require greater flexibility in the way they access and process their Big Data. Their needs include both advanced analytics and access to diverse data sources. However, they also require robust, enterprise-class data management for their mission-critical data. This work describes our initial efforts toward a solution that satisfies the above requirements by integrating the HPE Vertica enterprise database with Apache Spark's open-source computation engine. In this talk I will focus on our methods for fast and reliable bulk data transfers between Vertica and Spark with exactly-once semantics. I will first describe the architectures of both systems, the challenges of guaranteeing exactly-once semantics for data transfers, and the interesting tradeoffs among these challenges for our design. Specifically, our design enables parallel data transfer tasks that can tolerate task failures, restarts, and speculative execution; we show how this can be done without an external scheduler coordinating the reliable transfer between the two independent systems under these conditions. We believe this approach generalizes to the class of MapReduce systems. Lastly, I will present performance results across several system configurations and datasets. Our integration provides a fabric on which our customers can get the best of both worlds: robust enterprise-class data management and analytics provided by Vertica, and flexibility in accessing and processing Big Data with Vertica and Spark.
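One common recipe for exactly-once bulk transfer without an external coordinator (an illustrative sketch under assumed names, not HPE's actual design) is to make every task attempt write to its own staging slot and let a single deterministic commit keep exactly one attempt per partition, so retries and speculative duplicates become harmless.

```python
import threading


class ExactlyOnceSink:
    """Toy staging/commit sink: writes are idempotent per (partition, attempt),
    and commit deterministically keeps one attempt per partition."""

    def __init__(self):
        self.staging = {}            # (partition, attempt) -> rows
        self.lock = threading.Lock()

    def write(self, partition, attempt, rows):
        # each attempt writes to its own slot; re-running an attempt just
        # overwrites the same slot, so failed/restarted tasks cause no harm
        with self.lock:
            self.staging[(partition, attempt)] = list(rows)

    def commit(self, n_partitions):
        """Keep exactly one attempt per partition (lowest attempt id wins);
        fail if any partition was never transferred."""
        final = {}
        for (part, attempt), rows in sorted(self.staging.items()):
            if part not in final:    # deterministic winner: first attempt seen
                final[part] = rows
        missing = set(range(n_partitions)) - set(final)
        if missing:
            raise RuntimeError("partitions never transferred: %s" % sorted(missing))
        return final
```

The key property is that correctness depends only on the commit step seeing at least one completed attempt per partition, not on which attempt ran or how many times, which is why no external scheduler has to police duplicates.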
Biography
Jeff LeFevre is a Software Engineer with HPE Vertica Big Data R&D in Sunnyvale, CA where he focuses on the integration with Spark. He joined Vertica in 2014 after completing his PhD from the Database Group at UC Santa Cruz. His dissertation focuses on physical design tuning for data management systems in the cloud. Prior to that he received an MS from the Systems Group at UC San Diego, and completed internships at Teradata, Google, and NEC Labs.
Sept. 23
Enabling Analytics at AWS, Mehul Shah, Amazon Web Services
Abstract
With the ubiquity of data sources and cheap storage, today's enterprises want to collect and store a wide variety of data, even before they know what to do with it. Examples include IoT streams, application-monitoring logs, point-of-sale transactions, ad impressions, mobile events, and more. This data is typically a mix of structured and unstructured, streaming and static, with varying degrees of quality. Given this variety and the increasing need to be data-driven, customers want a choice of tools for leveraging this data for business advantage. Toward this end, Amazon Web Services (AWS) offers a variety of fully managed data services that can be easily composed thanks to its service-oriented architecture. In this talk, we provide an overview of the breadth of data services available on AWS: storage, OLTP, data warehousing, and streaming. We give examples of how customers leverage and compose these to handle big data use cases ranging from traditional BI and analytics to real-time processing and prediction. Finally, we touch on some lessons from operating such services at scale.
Biography
Mehul is a software development manager in the Big Data division of AWS, contributing to the Redshift and Data Pipeline services. From 2011-2014, he was co-founder and CEO of Amiato, an ETL cloud service. Prior to that, he was a research scientist at HP Labs where his work spanned large-scale data management, distributed systems, and energy-efficient computing. He received his PhD in databases from UC Berkeley (2004), and MEng (1997) and BS in computer science and physics (1996) from MIT. He has received several awards including the NSDI 2016 Test of Time Award and SOSP 2007 best paper. In his spare time, he serves on the SortBenchmark committee.
Sept. 30
MacroBase: Analytic Monitoring for the Internet of Things, Peter Bailis, Stanford University
Abstract
An increasing proportion of data today is generated by automated processes, sensors, and devices—collectively, the Internet of Things (IoT). IoT applications’ rising data volume, demands for time-sensitive analysis, and heterogeneity exacerbate the challenge of identifying and highlighting important trends in IoT deployments. In response, we present MacroBase, a data analytics engine that performs statistically-informed analytic monitoring of IoT data streams by identifying deviations within streams and generating potential explanations for each. MacroBase is the first analytics engine to combine streaming outlier detection and streaming explanation operators, allowing cross-layer optimizations that deliver order-of-magnitude speedups over existing, primarily non-streaming alternatives. As a result, MacroBase can deliver accurate results at speeds of up to 2M events per second per query on a single core. MacroBase has delivered meaningful analytic monitoring results in production, including an IoT company monitoring hundreds of thousands of vehicles.
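For intuition, the kind of statistically informed outlier detection MacroBase combines with explanation can be approximated by a robust median/MAD rule. The sketch below is a simplified batch stand-in with made-up sensor readings, not MacroBase's actual streaming operators:

```python
import statistics

def mad_outliers(stream, threshold=3.0):
    """Flag points whose distance from the median exceeds `threshold`
    robust standard deviations (a MAD-based z-score), a common
    statistically-informed outlier rule for noisy sensor data."""
    data = list(stream)
    med = statistics.median(data)
    mad = statistics.median(abs(x - med) for x in data)
    scale = 1.4826 * mad  # MAD -> stddev under a normality assumption
    return [x for x in data if scale > 0 and abs(x - med) / scale > threshold]

readings = [10.1, 9.8, 10.0, 10.2, 9.9, 42.0, 10.1]
print(mad_outliers(readings))  # the 42.0 reading is flagged
```

A production system like MacroBase maintains such statistics incrementally over a stream and then searches for attributes (e.g., vehicle model, firmware version) that explain the flagged points.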
Biography
Peter Bailis is an assistant professor of Computer Science at Stanford University. Peter's research in the Future Data Systems group (http://futuredata.stanford.edu/) focuses on the design and implementation of next-generation, post-database data-intensive systems. His work spans large-scale data management, distributed protocol design, and architectures for high-volume complex decision support. He is the recipient of an NSF Graduate Research Fellowship, a Berkeley Fellowship for Graduate Study, best-of-conference citations for research appearing in both SIGMOD and VLDB, and the CRA Outstanding Undergraduate Researcher Award. He received a Ph.D. from UC Berkeley in 2015 and an A.B. from Harvard College in 2011, both in Computer Science.
Oct. 7
Apache SystemML: Declarative Machine Learning at Scale, Niketan Pansare, IBM Almaden Research Center
Abstract
Scalable machine learning is ubiquitous in virtually every industry, ranging from insurance and manufacturing to finance and the health sciences. Expressing and running machine learning algorithms at scale and for varying data characteristics is challenging. In this talk, we will discuss our experience in building Apache SystemML, peek at challenging optimization and implementation strategies for exploiting data-parallel platforms such as MapReduce and Spark, and provide performance and scalability insights.
Biography
Niketan Pansare works at IBM Research Almaden on advanced information management systems spanning analytics, distributed data processing platforms, and hardware acceleration, as well as their application in mobile and cloud settings. At a high level, his research involves developing statistical models and building systems for analyzing large-scale and heterogeneous data. Prior to joining IBM, Niketan was a PhD student at Rice University, where he was advised by Dr. Chris Jermaine. His PhD thesis is titled "Large-Scale Online Aggregation Via Distributed Systems."
Oct. 14
Working around the CAP Theorem, Vijayshankar Raman, IBM Almaden Research Center
Abstract
The CAP theorem is a painful reality that all distributed systems have to deal with: they must either assume a tightly coupled setting or have inconsistent global state. But real-world applications never have tightly coupled components. Instead, they rely on elaborate compensation logic that is built into the application program, usually outside the boundary of a database transaction. We present a method to achieve serializable consistency in a loosely coupled setting by allowing such compensation logic to be part of the transaction itself, and serializing each transaction to a point after its commit.
Biography
Vijayshankar Raman is a Research Staff Member in the database group at the IBM Almaden Research Center, working on hybrid transaction and analytic processing.
Oct. 21
Bridging the I/O Gap between Spark and Scientific Data Formats on Supercomputer, Jialin Liu, Lawrence Berkeley National Lab
Abstract
Spark has been tremendously powerful in performing Big Data analytics in distributed data centers. However, using the Spark framework on HPC systems to analyze large-scale scientific data poses several challenges. For instance, the parallel file systems are shared among all computing nodes, in contrast to shared-nothing architectures. Another challenge is in accessing data stored in scientific data formats, such as HDF5 and NetCDF, that are not natively supported in Spark. Our study focuses on improving the I/O performance of Spark on HPC systems for reading large scientific data arrays, e.g., HDF5/NetCDF. We select several scientific use cases to drive the design of an efficient parallel I/O API for Spark on supercomputers, called H5Spark. We optimize the I/O performance, taking into account the Lustre file system striping. We evaluate the performance of H5Spark on Cori, a Cray XC40 system located at NERSC/LBNL, and compare its I/O performance with MPI and NASA's SciSpark. H5Spark has enabled the success of the largest PCA run on a supercomputer and has been used by various national labs. It is now endorsed by The HDF Group for further development.
Biography
Jialin Liu is a research engineer at Lawrence Berkeley National Lab. He joined LBNL shortly after receiving his Ph.D. in computer science from Texas Tech University in 2015. Before that, he received his B.S. in computer science in 2011. His research interests are parallel I/O and scientific data management (typically millions of files and TBs of data). Recently, he has been exploring object-based big science data management and I/O format design for astronomy datasets.
Nov. 4
Nonconvex Optimization by Complexity Progression, Hossein Mobahi, Google Research
Abstract
A large body of machine learning problems require minimization of a nonconvex objective function. For some of these problems, local optimization techniques (such as gradient descent, Newton's method, etc.) may converge slowly or get stuck in suboptimal solutions. In this talk I describe an alternative approach for tackling nonconvex optimization. The idea is to start from a simpler optimization problem and solve it; we then progressively transform that objective function into the actual one while tracking the path of the minimizer. While this general idea has been used for a long time, its construction has been quite heuristic: there is no principled, theoretically justified answer to how the initial (simplified) problem should be chosen and how it should be transformed into the actual problem, yet the success of the technique depends drastically on these choices. In this talk I argue that the Weierstrass transform (Gaussian convolution) is a sensible choice for creating simpler problems, and support this claim mathematically. I present applications of this method to problems in deep learning and image registration.
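To make the continuation idea concrete, here is a toy one-dimensional sketch (the test function, grid discretization, and schedule of smoothing widths are illustrative choices of ours, not the talk's construction): smooth the objective heavily by Gaussian convolution, minimize that globally, then follow the minimizer as the smoothing shrinks to zero.

```python
import numpy as np

def smooth(values, sigma, dx):
    """Gaussian convolution of grid samples (the Weierstrass transform)."""
    if sigma == 0:
        return values
    radius = int(4 * sigma / dx)
    t = np.arange(-radius, radius + 1) * dx
    kernel = np.exp(-t**2 / (2 * sigma**2))
    return np.convolve(values, kernel / kernel.sum(), mode="same")

def graduated_minimize(f, lo, hi, sigmas=(1.0, 0.5, 0.2, 0.0), n=2001):
    """Minimize the most-smoothed problem globally, then track the
    nearest local minimum as the smoothing width sigma shrinks."""
    xs = np.linspace(lo, hi, n)
    dx = xs[1] - xs[0]
    fx = f(xs)
    i = int(np.argmin(smooth(fx, sigmas[0], dx)))
    for sigma in sigmas[1:]:
        s = smooth(fx, sigma, dx)
        while 0 < i < n - 1:          # slide downhill on the new level
            if s[i - 1] < s[i]:
                i -= 1
            elif s[i + 1] < s[i]:
                i += 1
            else:
                break
    return xs[i]

# Wiggly objective with many local minima; global minimum at x = 0.
f = lambda x: x**2 + 2.0 * np.sin(5 * x)**2
print(graduated_minimize(f, -4.0, 4.0))  # ~0.0
```

Heavy smoothing washes out the high-frequency sine term, leaving an essentially convex bowl; tracking the minimizer as sigma decreases then lands in the global basin, whereas plain local descent from an arbitrary start can be trapped by any of the local minima.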
Biography
Hossein Mobahi (http://people.csail.mit.edu/hmobahi) is a research scientist at Google, Mountain View. His research interests include machine learning, optimization, computer vision, and especially the intersection of the three. Prior to Google, he was a postdoctoral researcher in the Computer Science and Artificial Intelligence Lab. (CSAIL) at MIT. He obtained his PhD from the University of Illinois at Urbana-Champaign (UIUC) in 2012.
Nov. 18
Bootstrap and Uncertainty Propagation: New Theory and Techniques in Approximate Query Processing, Kai Zeng, Microsoft Research
Abstract
Sampling is one of the most commonly used techniques in Approximate Query Processing (AQP), an area of research now made more critical by the need for timely and cost-effective analytics over "Big Data". The sheer amount of data and the complexity of analytics pose new challenges to sampling-based AQP, calling for innovation in several technical aspects: How can we estimate the errors of general SQL queries with ad-hoc user-defined functions when they are computed on samples? How can we better present approximate query results to the user? How can we build database engines that are more suitable for approximate query processing? In this talk, I will present a series of my work that answers these questions. We will see: (1) bootstrap, an automated statistical technique, can be integrated with relational algebra theory and database systems to provide accuracy estimation for general OLAP queries; (2) by combining bootstrap error estimation with a novel uncertainty propagation theory, OLAP query processing can shift to an incremental execution engine that provides a smooth trade-off between query accuracy and latency, fulfilling a full spectrum of user requirements from approximate but timely query execution to traditional accurate query execution.
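The bootstrap idea at the heart of the talk can be sketched in a few lines. The version below is the generic textbook percentile bootstrap over an in-memory sample (the data distribution and sample size are made up), not the talk's integration into relational algebra and the database engine:

```python
import random

def bootstrap_ci(sample, stat, n_boot=2000, alpha=0.05, seed=42):
    """Percentile-bootstrap confidence interval for `stat` evaluated
    on a data sample: resample with replacement, recompute the
    statistic, and read off the alpha/2 and 1-alpha/2 quantiles."""
    rng = random.Random(seed)
    n = len(sample)
    reps = sorted(stat([rng.choice(sample) for _ in range(n)])
                  for _ in range(n_boot))
    return stat(sample), (reps[int(alpha / 2 * n_boot)],
                          reps[int((1 - alpha / 2) * n_boot) - 1])

# Approximate AVG over a large table using only a sample of its rows.
rng = random.Random(7)
sample = [rng.uniform(10, 50) for _ in range(500)]   # true mean is 30
avg = lambda xs: sum(xs) / len(xs)
estimate, (lo, hi) = bootstrap_ci(sample, avg)
print(f"AVG ~= {estimate:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The appeal for AQP is that the same resampling recipe works for arbitrary user-defined aggregates where no closed-form error formula exists; the talk's contribution is making this efficient inside a query processor rather than by naive re-execution.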
Biography
Kai Zeng is a senior scientist at the Cloud and Information Services Lab, Microsoft. His research interest lies in large-scale data-intensive systems. He received his PhD in databases from UCLA in 2014 and was previously a postdoctoral researcher at the UC Berkeley AMPLab. He has won several awards, including the SIGMOD 2012 best paper award and the SIGMOD 2014 best demo award.
Dec. 2
Modeling and Fast Numerical Methods for Fractional Partial Differential Equations, Hong Wang, University of South Carolina
Abstract
Traditional second-order diffusion PDEs model Fickian diffusion processes, in which the particles follow Brownian motion. However, many diffusion processes have been found to exhibit anomalous diffusion behavior, in which the probability density functions of the underlying particle motions are characterized by an algebraically decaying tail and so cannot be modeled properly by second-order diffusion PDEs. Fractional PDEs provide a powerful tool for modeling these problems, as the probability density functions of anomalous diffusion processes satisfy these equations. Fractional PDEs present new difficulties that were not encountered in the context of integer-order PDEs. Computationally, the numerical methods for space-fractional PDEs generate dense matrices. Direct solvers were traditionally used, which require $O(N^3)$ computations per time step and $O(N^2)$ memory, where $N$ is the number of unknowns. We present recent developments in accurate and efficient numerical methods for space-fractional PDEs, which have optimal storage and almost linear computational complexity. Numerical experiments are presented to show the strength of the numerical methods developed.
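The "almost linear" complexity comes from structure: discretizations of space-fractional diffusion operators yield dense but Toeplitz-like matrices, so a matrix-vector product can be done in $O(N \log N)$ via a circulant embedding and the FFT instead of the $O(N^2)$ dense product. Below is a generic sketch of that standard building block (not Prof. Wang's full solver), checked against the dense product:

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply a Toeplitz matrix (first column c, first row r, with
    c[0] == r[0]) by x in O(N log N): embed it in a circulant matrix
    of size 2N and diagonalize the circulant with the FFT."""
    n = len(c)
    # First column of the circulant embedding of size 2n.
    col = np.concatenate([c, [0.0], r[:0:-1]])
    pad = np.concatenate([x, np.zeros(n)])
    return np.real(np.fft.ifft(np.fft.fft(col) * np.fft.fft(pad))[:n])

# Verify against the explicit dense product on a small random example.
n = 8
rng = np.random.default_rng(0)
c, r = rng.standard_normal(n), rng.standard_normal(n)
r[0] = c[0]  # the diagonal entry must agree
T = np.array([[c[i - j] if i >= j else r[j - i] for j in range(n)]
              for i in range(n)])
x = rng.standard_normal(n)
print(np.allclose(T @ x, toeplitz_matvec(c, r, x)))  # prints True
```

With fast matvecs in hand, the dense system can be solved by a Krylov iteration (e.g., conjugate gradients) storing only the defining $O(N)$ entries, which is the storage-optimal, near-linear regime the abstract refers to.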
Biography
Hong Wang is a professor in the Department of Mathematics at the University of South Carolina. He received his BS in Computational Mathematics from Shandong University, China, in 1982, and his PhD in Mathematics from the University of Wyoming in 1992. He held postdoctoral positions in the Department of Mathematics and the Institute for Scientific Computation at Texas A&M University before becoming a faculty member at the University of South Carolina. His research fields cover accurate and efficient numerical approximations and scientific computations, mathematical modeling, and numerical and mathematical analysis of advection-diffusion equations, multiphase-multicomponent compositional flow and transport, and space-time fractional partial differential equations and related nonlocal models.
Spring 2016
Jan. 22
No seminar
Jan. 29
Monitoring Entire HPC Centers: the Sonar Project at LLNL, Todd Gamblin, Lawrence Livermore National Laboratory
Abstract
Increasingly, performance variability is an obstacle to understanding the throughput of large-scale supercomputers. Two runs of the same code on the same system may yield vastly different runtimes, depending on compiler flags, system noise, dynamic scheduling, and shared resources such as memory, filesystems, and networks. Understanding an application's performance characteristics requires an increasingly large number of trial runs and measurements. Analyzing performance measurements from such runs is a data-intensive task. To address these issues, Livermore Computing is deploying Sonar, a "big data" cluster that will store and analyze performance data from LLNL's entire HPC center. Sonar aggregates measurements from the network fabric, filesystem nodes, cluster nodes, and applications, and will serve as a central data warehouse for measurements collected by performance tools. We will give an overview of the Sonar cluster and the tools we have integrated with it, and discuss some early techniques for analyzing the performance data gathered from this system.
Biography
Todd is a computer scientist in the Center for Applied Scientific Computing at Lawrence Livermore National Laboratory. His research focuses on scalable tools for measuring, analyzing, and visualizing performance data from massively parallel applications. Todd is also involved with many production projects at LLNL. He works with Livermore Computing’s Development Environment Group to build tools that allow users to deploy, run, debug, and optimize their software for machines with million-way concurrency. Todd received his Ph.D. in computer science from the University of North Carolina at Chapel Hill in 2009. His dissertation investigated parallel methods for compressing and sampling performance measurements from hundreds of thousands of concurrent processors. He received his B.A. in Computer Science and Japanese from Williams College in 2002. He has also worked as a software developer in Tokyo and held research internships at the University of Tokyo and IBM Research.
Feb. 5
MAGIC: bringing lawn irrigation into the IoT movement, Daniel Winkler, UC Merced
Abstract
Lawns make up the largest irrigated crop by surface area in North America and carry with them a demand for over 9 billion gallons of freshwater each day. Despite recent developments in irrigation control and sprinkler technology, state-of-the-art irrigation systems do nothing to compensate for areas of turf with heterogeneous water needs. In this work, we overcome the physical limitations of the traditional irrigation system with the development of a sprinkler node that can sense the local soil moisture, communicate wirelessly, and actuate its own sprinkler based on a centrally computed schedule. A model is then developed to compute moisture movement from runoff, absorption, and diffusion. Integrated with an optimization framework, optimal valve scheduling can be found for each node in the space. In a turf area covering over 10,000 square feet, two separate deployments spanning a total of 7 weeks show that MAGIC can reduce water consumption by 23.4% over traditional campus scheduling, and by 12.3% over state-of-the-art evapotranspiration systems, while substantially improving conditions for plant health. In addition to its environmental, social, and health benefits, MAGIC is shown to return its investment in 16-18 months based on water consumption alone.
Biography
Daniel Winkler received his BS in Computer Science and Engineering with honors from UC Merced in 2013. An ACM member, he has since been pursuing his PhD under the advisement of Dr. Alberto Cerpa in UC Merced's ANDES Lab. Although his current research focuses on intelligent design and management of turf irrigation systems through the use of embedded devices, Daniel also has a growing interest in general resource management applications.
Feb. 10
Perceiving and Interacting with Images, Ming-Ming Cheng, Nankai University
Abstract
In this talk, I will introduce our latest research in image scene understanding and interactive technologies. Our first line of research aims at rapid image scene understanding based on visual attention mechanisms (IEEE TPAMI 2015, IEEE CVPR 2014 Oral). This is an area where people often have diverse feelings: some researchers believe that it is a principled research direction, while others might doubt its robustness. Instead of specific algorithm design, I would like to highlight how these algorithms can be robustly used in various applications, including image composition, photo montage, image retrieval, object detection, semantic segmentation, and even deep learning. Our second line of research aims at intelligent image manipulation mechanisms. We try to explore smart image manipulation techniques for easily obtaining annotated data during users' natural interaction with the real world (ACM TOG 2014, ACM TOG 2015), which is partially motivated by the growing requirement for high-quality labeled training data (expensive to collect) for scene understanding.
Biography
Ming-Ming Cheng is an associate professor in the Department of Computer Science, Nankai University, China. He received his PhD degree from Tsinghua University, China, in 2012 under the supervision of Prof. Shi-Min Hu, working closely with Prof. Niloy Mitra. He then spent two years as a research fellow in the UK, working with Prof. Philip Torr in Oxford. Dr. Cheng's research primarily focuses on algorithmic issues in image scene understanding, including image segmentation, salient object detection, image editing, and objectness proposals. He has published several highly cited papers in ACM TOG, IEEE TPAMI, etc. See also: http://mmcheng.net
Feb. 12
How to Get Your CVPR Paper Rejected?, Ming-Hsuan Yang, UC Merced
Abstract
In this talk, I will share my experience in how to publish papers in top conferences and journals. In particular, I will discuss the pitfalls and common mistakes of submitted papers from the perspectives of area chairs and associate editors.
Biography
Ming-Hsuan Yang is an associate professor in Electrical Engineering and Computer Science at the University of California, Merced. He received the PhD degree in Computer Science from the University of Illinois at Urbana-Champaign in 2000. He serves as an area chair for several conferences, including the IEEE Conference on Computer Vision and Pattern Recognition, IEEE International Conference on Computer Vision, European Conference on Computer Vision, Asian Conference on Computer Vision, IEEE International Conference on Pattern Recognition, and AAAI National Conference on Artificial Intelligence. He serves as a program co-chair for the IEEE International Conference on Computer Vision in 2019 and the Asian Conference on Computer Vision in 2014, and as a general co-chair for the latter in 2016. He serves as an associate editor of the IEEE Transactions on Pattern Analysis and Machine Intelligence (2007 to 2011), International Journal of Computer Vision, Computer Vision and Image Understanding, Image and Vision Computing, and Journal of Artificial Intelligence Research. Yang received the Google Faculty Award in 2009 and the Faculty Early Career Development (CAREER) award from the National Science Foundation in 2012.
Feb. 19
Vertical Partitioning for Query Processing over Raw Data, Weijie Zhao, UC Merced
Abstract
Traditional databases are not equipped with the adequate functionality to handle the volume and variety of "Big Data". Strict schema definition and data loading are prerequisites even for the most primitive query session. Raw data processing has been proposed as a schema-on-demand alternative that provides instant access to the data. When loading is an option, it is driven exclusively by the currently running query, resulting in sub-optimal performance across a query workload. In this talk, we investigate the problem of workload-driven raw data processing with partial loading. We model loading as fully-replicated binary vertical partitioning. We provide a linear mixed integer programming optimization formulation that we prove to be NP-hard. We design a two-stage heuristic that comes within close range of the optimal solution in a fraction of the time. We extend the optimization formulation and the heuristic to pipelined raw data processing, a scenario in which data access and extraction are executed concurrently. We provide three case studies over real data formats that confirm the accuracy of the model when implemented in a state-of-the-art pipelined operator for raw data processing.
Biography
Weijie Zhao is a Ph.D. student in the EECS department of UC Merced, working with Prof. Florin Rusu. He received his B.S. from East China Normal University, China. His research interests include scientific data processing and database theory.
Feb. 26
Topological methods for motion planning and trajectory analysis, Florian Pokorny, UC Berkeley
Abstract
One key open problem in robotics is the question of how a robot can reason about the space of possible trajectories. In particular, how can a robot determine not just a single shortest path between two points, but develop an understanding of continuous deformation classes (homotopy classes) of trajectories in configuration space? Over the last 5 years, computational geometry techniques for computing topological information from data have advanced dramatically. We will discuss our recent work on using persistent homology to determine a collection of homotopy inequivalent trajectories in robot configuration spaces and will present recent work on topologically clustering large databases of trajectories into consistent clusters with applications to the learning of robot control policies and motion primitives.
Biography
Florian Pokorny received a BSc (Honours) in Mathematics from the University of Edinburgh, UK, in 2005. He then obtained a Master of Advanced Studies in Mathematics (Part III of the Mathematical Tripos) from the University of Cambridge, UK, before completing his PhD in pure mathematics, in the field of differential geometry, under the supervision of Prof. Michael Singer at the University of Edinburgh in 2011. Following his PhD, he refocused his research on robotics and machine learning problems, particularly topological methods, robotic manipulation, and motion planning, and joined the Center for Autonomous Systems at KTH Royal Institute of Technology, Stockholm, Sweden, working with Prof. Danica Kragic. In May 2015, he joined the AMPLab & Automation Lab at UC Berkeley, where he conducts postdoctoral research with Prof. Ken Goldberg and his group.
March 4
EECS/CITRIS: Robot Intelligence in a Cloud-Connected World, James Kuffner, Toyota Research Institute
Abstract
Robotics is currently undergoing a dramatic transformation. High-performance networking and cloud computing has radically transformed how individuals and businesses manage data, and is poised to disrupt the state-of-the-art in the development of intelligent machines. This talk explores the long-term prospects for the future evolution of robot intelligence based on search, distributed computing, and big data. Ongoing research on autonomous cars and humanoid robots will be discussed in the context of how cloud-connectivity will enable future robotic systems to be more capable and useful.
Biography
James Kuffner is a Roboticist at the Toyota Research Institute and an Adjunct Associate Professor at the Robotics Institute, Carnegie Mellon University. He received a Ph.D. from the Stanford University Dept. of Computer Science Robotics Laboratory in 1999, and was a Japan Society for the Promotion of Science (JSPS) Postdoctoral Research Fellow at the University of Tokyo working on software and planning algorithms for humanoid robots. He joined the faculty at Carnegie Mellon University's Robotics Institute in 2002. He has published over 125 technical papers, holds more than 40 patents, and received the Okawa Foundation Award for Young Researchers in 2007. In 2009, James joined Google as part of the initial engineering team building Google’s self-driving car. He is known for introducing the term "Cloud Robotics" in 2010 to describe how network-connected robots could take advantage of distributed computation and data stored in the cloud. In 2014, he was appointed head of Google’s Robotics division, which he co-founded along with Andy Rubin. In 2016, he joined the newly created Toyota Research Institute as the Area Lead for Cloud Computing.
March 11
EECS/CITRIS: We are all makers, Dale Dougherty, Maker Media
Abstract
Maker Media is a global platform for connecting Makers with each other, with products and services, and with our partners. Through media, events and ecommerce, Maker Media serves a growing community of Makers who bring a DIY mindset to technology. Whether as hobbyists or professionals, Makers are creative, resourceful and curious, developing projects that demonstrate how they can interact with the world around them. The launch of Make: magazine in 2005, followed by Maker Faire in 2006, jumpstarted a worldwide Maker Movement, which is transforming innovation, culture and education. Located in San Francisco, CA, Maker Media is the publisher of Make: magazine and the producer of Maker Faire. It also develops “getting started” kits and books that are sold in its Maker Shed store as well as in retail channels.
Biography
Dale Dougherty is the founder and Executive Chairman of Maker Media, Inc. In 2005, Maker Media launched Make: magazine, and Maker Faire held its first event in San Francisco in 2006. He has developed a maker ecosystem, serving the needs of makers as they seek out product support, startup advice, and funding avenues. His idea for Make: magazine came from his experience with the Hacks book series, through which he recognized that hackers were playing with hardware and, more broadly, were looking at how to hack the world, not just computers.
March 18
Dot-Product Join: An Array-Relation Join Operator for Big Model Analytics, Chengjie Qin, UC Merced
Abstract
Big Data analytics has been approached exclusively from a data-parallel perspective, where data are partitioned across multiple workers – threads or separate servers – and model training is executed concurrently over different partitions, under various synchronization schemes that guarantee speedup and/or convergence. The dual problem – Big Model – which has, surprisingly, received no attention in database analytics, is how to manage models with millions if not billions of parameters that do not fit in memory. In this talk, I will introduce the first secondary-storage array-relation dot-product join operator between a set of sparse arrays and a dense relation. The dot-product join operator incurs minimal overhead when sufficient memory is available and gracefully degrades when memory resources become scarce. Overall, the dot-product join operator achieves an order of magnitude reduction in execution time for Big Model analytics over alternative in-database solutions.
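The semantics of the operator can be illustrated in a few lines: each sparse training example is joined with the entries of a model vector stored as a relation, and the matching (index, value) pairs are multiplied and summed. The toy in-memory sketch below (data and key names are made up) captures only the logic; the actual operator works out of secondary storage with careful scheduling of model pages:

```python
def dot_product_join(sparse_arrays, model):
    """For each sparse array (dict of index -> value), compute its dot
    product with a model vector stored as a relation (index -> parameter),
    touching only the model entries the sparse array references."""
    for key, vec in sparse_arrays.items():
        yield key, sum(v * model[i] for i, v in vec.items())

model = {0: 0.5, 1: -1.0, 2: 2.0, 3: 0.25}              # "dense relation"
examples = {"a": {0: 2.0, 2: 1.0}, "b": {1: 4.0, 3: 8.0}}  # sparse arrays
print(dict(dot_product_join(examples, model)))
# {'a': 3.0, 'b': -2.0}
```

When the model does not fit in memory, the order in which examples probe model entries determines how often model pages are evicted and re-read, which is the scheduling problem the talk addresses.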
Biography
Chengjie Qin is a PhD candidate in the EECS department at UC Merced, advised by Prof. Florin Rusu. His research focuses on supporting large-scale data analytics in databases. He received his Bachelor of Science degree in Computer Science in 2011 from Fuzhou University, China.
March 25
No seminar (Cesar Chavez Holiday)
April 1
EECS/CITRIS: Platypus Cooperative Robotic Boats: Learning to Balance R&D and Productization, Paul Scerri, Platypus LLC
Abstract
After nearly 20 years in academia featuring several papers with "real-world" in the title, I recently left academia to found a company that commercializes one of our robots. The small, autonomous watercraft have the ability to dramatically change how water data is collected. In this talk, I will describe some of the challenges and issues I've encountered as we take cutting-edge technology from this community and put it in the hands of end users. In the course of the effort, I've learned that research and commercial success are not always compatible, but that some of the same creative skills and ability to deal with failure are essential. Over time, we've found a balance between research and commercialization that helps both move forward in parallel, as well as business models that work for technology on the edge of research.
Biography
Dr. Scerri is co-founder and President of Platypus, LLC, and the Director of the Perceptronics Solutions Robotics Lab. Prior to this he was an Associate Research Professor at the Carnegie Mellon University Robotics Institute. The focus of his research while at CMU was multi-agent and multi-robot systems. In his current roles in industry, the emphasis is on taking state-of-the-art research and applying it to real problems, with a specific focus on making the collection of important data about an environment less expensive, more reliable, and more accessible.
April 7
Making Information Retrieval Easier: Directing Exploratory Search over 50 Million Documents by Interactive Intent Modeling, Jaakko Peltonen, University of Tampere and Aalto University
Abstract
Researchers must navigate big data. Current scientific knowledge includes 50 million published articles. How can a system help a researcher find relevant documents in their field? We introduce SciNet, an interactive search system that anticipates the user's search intents by estimating them from the user's interaction with the interface. The estimated intents are visualized on an intent radar, a radial layout that organizes potential intents as directions in the information space. The system assists users to direct their search by allowing feedback to be targeted on keywords representing the potential intents. Users can provide feedback by moving the keywords on the intent radar. The system then learns and visualizes improved estimates and corresponding documents. The resulting user models are explicit open user models curated by the user during the interactive information seeking. SciNet has been shown to significantly improve users' task performance and the quality of retrieved information without compromising task execution time. We also show how user models learned in SciNet can be used to help cold-start recommendation in another system, the CoMeT talk management system, by cross-system user model transfer across the systems.
Biography
Jaakko Peltonen is an Associate Professor of statistics (data analysis) at the School of Information Sciences, University of Tampere, Finland, where he leads the Statistical Machine Learning and Exploratory Data Analysis group; he is also currently an Academy Research Fellow at Aalto University, Finland, where he is a PI of the Probabilistic Machine Learning research group. He is an associate editor of Neural Processing Letters and an editorial board member of Heliyon. He has served on the organizing committees of seven international conferences and one international summer school, has served on the program committees of 31 international conferences/workshops, and has performed referee duties for numerous international journals and conferences. He is an expert in statistical machine learning methods for exploratory data analysis, visualization of data, and learning from multiple sources.
April 8
Complex-valued Linear Layers for Deep Neural Network-based Acoustic Models for Speech Recognition, Zak Shafran, Google
Abstract
In recent years, deep neural networks have proven to be highly effective for acoustic modeling in speech recognition. However, the input to the acoustic model consists of hand-crafted features, namely the logarithm of the energy of the Mel-weighted filter bank (log-mel). Mel-weighted filters were developed about four decades ago and were inspired by human perception. Apart from the possibility that they may not be optimal features for automatic speech recognition, the log-mel features strip information from the speech signal that may be useful, especially for jointly modeling de-reverberation and beam-forming within the neural networks. As an alternative to log-mel features, we investigate using the complex-valued frequency transform of the speech frames directly as input to the acoustic models. To utilize the complex-valued inputs, we employ complex-valued linear layers whose parameters are learned jointly with the rest of the acoustic model. In this talk, we will discuss the properties of these complex-valued layers and demonstrate their advantage on a large speech recognition task.
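At the level of shapes, the idea is simply a linear layer whose weights, bias, and inputs are complex. In the sketch below, the frame length, layer width, random initialization, and the log-magnitude readout are our own assumptions for illustration, not the talk's architecture (where the complex weights are trained jointly with the acoustic model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Complex spectrum of one 25 ms speech frame (400 samples at 16 kHz),
# used directly instead of hand-crafted log-mel features.
frame = np.fft.rfft(rng.standard_normal(400))        # shape (201,), complex

# A complex-valued linear layer: both weights and bias are complex.
W = 0.01 * (rng.standard_normal((128, frame.size))
            + 1j * rng.standard_normal((128, frame.size)))
b = np.zeros(128, dtype=complex)
hidden = W @ frame + b                               # shape (128,), complex

# Project to real values before the conventional (real-valued) layers.
features = np.log1p(np.abs(hidden))
print(features.shape)  # (128,)
```

Because a complex multiply rotates as well as scales, such a layer can in principle learn phase-sensitive operations (e.g., delay compensation for beam-forming) that magnitude-only features discard.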
Biography
Izhak Shafran is a speech and machine learning researcher who has been working on acoustic modeling for speech recognition. Before joining Google, he was an Associate Professor and a member of the Center for Spoken Language Processing at OHSU, where he also focused on medical applications of spoken language technology. He graduated from the University of Washington in Seattle in 2001 and subsequently worked in the speech algorithms group at AT&T Research Labs in Florham Park. In the summer of 2006, he was a visiting professor at the University of Paris-Sud, working at LIMSI. Subsequently, he was research faculty at the Center for Language and Speech Processing (CLSP) at Johns Hopkins University. He received an NIH Career Development Award in 2010.
April 15
Online Aggregation On Raw Data, Yu Cheng, UC Merced
Abstract
Traditional in-situ data processing systems support immediate SQL querying over raw files, but their performance across a query workload is limited by the speed of full scans, tokenizing, and parsing of the entire file. Online aggregation (OLA) has been introduced as an efficient method for data exploration that identifies uninteresting patterns faster by continuously estimating the result of a computation during the actual processing---the computation can be stopped as soon as the estimate is accurate enough for the result to be deemed uninteresting. However, building an efficient OLA system has a high upfront cost of randomly shuffling and loading the data. In this talk, I introduce OLA-RAW, a novel system for in-situ processing over raw files that integrates data loading and online aggregation seamlessly while preserving their advantages---generating accurate estimates as early as possible and having zero time-to-query. We design an accuracy-driven bi-level sampling process over raw files and define and analyze the corresponding estimators. The samples are extracted and loaded adaptively, in random order, based on the current system resource utilization. We implement OLA-RAW starting from a state-of-the-art in-situ data processing system and evaluate its performance across a variety of datasets and file formats. Our results show that OLA-RAW maximizes resource utilization across a query workload and dynamically chooses the optimal sampling and loading plan that minimizes each query's execution time while guaranteeing the required accuracy. The end result is a focused data exploration process that avoids unnecessary work and discards uninteresting data.
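The core online-aggregation loop can be illustrated with a toy sketch (this is an assumption-laden simplification, not the OLA-RAW estimator): process file chunks in random order, maintain a running estimate of a SUM with a confidence interval, and stop early once the interval is tight enough relative to the estimate.

```python
import math
import random

random.seed(42)

def online_sum_estimate(chunks, z=1.96, rel_err=0.01):
    """Online aggregation over randomly ordered chunks.

    After each chunk, scale the mean chunk total up to an estimate of
    the full SUM and compute a normal-approximation confidence interval
    (with finite-population correction). Stop as soon as the interval
    half-width drops below rel_err * estimate.
    """
    n = len(chunks)
    seen, totals = 0, []
    for chunk in random.sample(chunks, n):       # random chunk order
        totals.append(sum(chunk))
        seen += 1
        mean = sum(totals) / seen
        est = n * mean                           # scale up to full data
        if seen > 1:
            var = sum((t - mean) ** 2 for t in totals) / (seen - 1)
            half = z * n * math.sqrt(var / seen) * math.sqrt(1 - seen / n)
            if est != 0 and half / abs(est) < rel_err:
                return est, half, seen           # early stop
    return est, 0.0, seen                        # exact after a full scan
```

If the accuracy target is never met, the loop degenerates into a full scan and the estimate becomes the exact answer, mirroring how OLA converges to the true result.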
Biography
Yu Cheng is a computer science PhD candidate at UC Merced, advised by Prof. Florin Rusu. His research focuses on in-situ data processing. He received his BS in Computer Science in 2005 from Wuhan University of Technology, Wuhan, China, and his MS in Computer Engineering in 2008 from Huazhong University of Science and Technology, Wuhan, China. Since 2011, he has been working towards his PhD degree in large-scale data processing. He has published several research papers in the area of database systems (in SIGMOD, TODS, SSDBM, etc.). He is a recipient of the 2016 Graduate Dean's Dissertation Fellowship and several fellowship awards during his Ph.D. study at UC Merced.
April 22
Exploring New Approaches for Mechanical Fruit Harvesting via Model-based Design, Stavros Vougioukas, UC Davis
Abstract
Mechanizing the hand harvesting of fresh-market crops constitutes one of the biggest challenges to the sustainability of the U.S. fruit and vegetable industry. Depending on the crop, labor contributes up to 60% of the variable production cost, and recent labor shortages have led to loss of production and reduction of planted acreage in several crops. Innovation is desperately needed in the design of mass (shake-and-catch) harvesters and selective fruit-picking robotic harvesters. This seminar will present the challenges related to mechanized harvesting and how concepts and tools from model-based design and robotics can be used to provide solutions. Regarding robotic fruit harvesters, most developed prototypes utilize multiple-degree-of-freedom arms, often kinematically redundant. The hypothesis is that, because branches constrain fruit reachability, redundancy is necessary to navigate through branches and reach fruits inside the canopy. Modern commercial orchards increasingly adopt trees of SNAP architectures (Simple, Narrow, Accessible, and Productive). This seminar will present results from a recent simulation study of linear fruit reachability (LFR) on high-density, trellised pear trees, in which linear-only motion was used to reach the fruits. Results based on digitized geometric tree models and fruit locations showed that 91.1% of the fruits were reachable after three "harvesting passes" with proper approach angles. This implies that for trees of SNAP-type architectures, fruit reachability may not require complex and expensive arms with many degrees of freedom. For shake-and-catch harvesting, results from a physics-based simulation of falling fruits will be shown, which suggest that when fruit-intercepting rods are inserted optimally into the tree canopies during shaking, the percentage of fruits hitting branches can be lowered by more than 50%. Such designs could enable mass harvesting with low fruit damage and, hence, provide mechanized harvesting solutions for some crops.
Biography
Dr. Stavros Vougioukas is an Assistant Professor of Biological and Agricultural Engineering at the University of California, Davis. He joined the Department in 2012, and his research group focuses on the development of robotic and automation systems for agricultural applications, with emphasis on mechanized harvesting of specialty crops. Dr. Vougioukas earned his Diploma in Electrical Engineering (1989) at Aristotle University, Greece. He undertook graduate studies in the US under a Fulbright Fellowship, completing his MS (1991) at SUNY Buffalo and his PhD (1995) at Rensselaer Polytechnic Institute, in Electrical, Computer, and Systems Engineering. His PhD thesis addressed force-guided assembly and robotic fine motion planning. He was a post-doctoral researcher for one year at the University of Parma, Italy. After his military service, he joined the faculty of Aristotle University, Greece, where he worked on agricultural automation for 10 years.
April 29
No seminar
Abstract
TBA
Biography
TBA
Abstract
3D XPoint™ technology is a new class of non-volatile memory that can help turn immense amounts of data into valuable information in real time. With up to 1,000 times lower latency and exponentially greater endurance than NAND, 3D XPoint technology can deliver game-changing performance for data-intensive applications. Its ability to enable high-speed, high-capacity data storage close to the processor creates new possibilities for system architects and promises to enable entirely new applications. In this talk, I will give an introduction to the 3D XPoint™ technology featured in Optane SSDs and DIMMs. I will also discuss the needs and the challenges to address in hardware architecture and operating systems to leverage this new technology. Lastly, I will introduce the NVM programming model that NVM developers will use in the future to address the ongoing proliferation of new NVM functionality and persistence features.
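The load/store persistence idiom at the heart of NVM programming models can be sketched as follows. This is a stand-in illustration only: a regular file plus `msync` takes the place of persistent memory, where a real implementation would use cache-line flushes and fences via a persistent-memory library; the file path is arbitrary.

```python
import mmap
import os

# Map a file into the address space, update it with ordinary "stores"
# into the mapping, then explicitly flush to reach a persistence point.
path = "/tmp/pmem_demo.bin"
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)              # back the mapping with one page

with open(path, "r+b") as f:
    m = mmap.mmap(f.fileno(), 4096)
    m[0:5] = b"hello"                    # store directly into the mapping
    m.flush()                            # persistence point (msync)
    m.close()

with open(path, "rb") as f:
    readback = f.read(5)                 # the update is durable on media
os.remove(path)
```

The key programming-model point is the explicit flush: until it completes, an update that is visible through loads may not yet have reached durable media.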
Biography
Dr. Chun-Yi Su is a senior software engineer at Intel Corporation, where he works on software innovations for 3D XPoint™ technology. He received his Ph.D. in Computer Science from Virginia Tech in 2014. In 2013, he was a Research Scholar at Lawrence Livermore National Laboratory, studying hybrid memory systems for future supercomputers. His research interests include memory and storage systems; high-performance computing; parallel and distributed architectures and applications; machine learning; performance/power monitoring, modeling, and analysis; tool support for parallel programming; power efficiency for parallel systems; and optimizing parallel and distributed I/O.
Fall 2015
Aug. 28
Re-architecting the Memory-Storage Stack with NVRAMs, Jishen Zhao, UC Santa Cruz
Abstract
Emerging NVRAMs promise persistent memory, which combines attractive attributes of both main memory (speed, a load/store interface) and storage (data persistence). However, supporting persistence in memory requires rethinking memory system design; the well-studied memory hierarchy design is no longer well suited to this new scenario. This talk will present our recent work on optimizing the performance of persistent memory systems with new memory control schemes and memory hierarchy designs.
Biography
Jishen Zhao is an assistant professor of Computer Engineering at UC Santa Cruz. Her research primarily falls in the areas of computer architecture and electronic design automation, with an emphasis on memory and storage system design, energy efficiency, and high-performance computing. Before joining UCSC, she was a Research Scientist at HP Labs, Palo Alto. Her research on persistent memory received the Best Paper Honorable Mention Award at MICRO 2013.
Sept. 4
Learning Plan Abstractions and Coordination for Agents in Real-Time Complex Game Environments, Arnav Jhala, UC Santa Cruz
Abstract
Player modeling in video games with complex environment and task models is a broad research area. This talk covers two aspects of player modeling: learning plan abstractions from observations of expert humans, and models of cooperation. First, I discuss learning plan abstractions of expert players in StarCraft. Real-Time Strategy (RTS) gameplay exhibits both cognitive complexity and task environment complexity. Expert StarCraft gameplay involves many cognitive processes, including estimation, anticipation, and adaptation. One approach to handling this complexity is to learn plan structures from observation of expert gameplay in competitive settings. We show that applying Generalized Sequence Mining algorithms to StarCraft replays results in automated extraction of tactical and strategic patterns that can be encoded in HTN-like plan structures. Next, I discuss belief models of inconsistent collaborators in a multi-agent domain. Maintaining an accurate set of beliefs in a partially observable scenario, particularly with respect to other agents operating in the same space, is a vital aspect of multi-agent planning. We analyze how the beliefs of an agent can be updated for fast adaptivity to changes in the behavior of an unknown teammate. Our results on a variation of the pursuit domain suggest the possibility of approximating a higher-level model by utilizing a belief distribution over a set of lower-level behaviors, particularly when the belief update strategy identifies changes in the behavior in a responsive manner.
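The belief-distribution idea can be sketched with a small, hypothetical example (the behavior names, likelihood values, and decay parameter are all illustrative, not taken from the talk): maintain a probability over candidate teammate behaviors and update it Bayesian-style after each observed action, with a decay term that keeps the belief responsive to behavior changes.

```python
def update_belief(belief, likelihoods, decay=0.9):
    """One Bayesian-style belief update over candidate behaviors.

    belief      : dict behavior -> prior probability (sums to 1)
    likelihoods : dict behavior -> P(observed action | behavior)
    decay       : exponent < 1 that flattens the prior slightly, so the
                  belief can shift quickly if the teammate changes behavior
    """
    posterior = {b: (belief[b] ** decay) * likelihoods[b] for b in belief}
    z = sum(posterior.values())
    return {b: p / z for b, p in posterior.items()}

# Start uniform over three candidate behaviors in a pursuit-style domain.
belief = {"chase": 1 / 3, "block": 1 / 3, "wander": 1 / 3}
# The teammate's observed move is much more likely under "block".
belief = update_belief(belief, {"chase": 0.1, "block": 0.8, "wander": 0.1})
```

After a few consistent observations, the mass concentrates on the behavior that best explains the teammate's actions, approximating a higher-level model without ever representing it explicitly.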
Biography
Arnav Jhala is an Associate Professor of Computational Media at the University of California, Santa Cruz. His research interests lie at the intersection of artificial intelligence and digital media, particularly in the areas of Computational Cinematography, Reasoning under Uncertainty in Complex Real-time Domains, and Computational Storytelling. At UCSC he directs the Computational Cinematics Studio, and teaches graduate and undergraduate courses in game design, game AI, game engine programming, interactive narratives, and computational cinematography. Arnav holds M.S. and Ph.D. degrees in Computer Science from North Carolina State University, USA (2004, 2009), and a B.Eng. in Computer Engineering from Gujarat University, India (2001). He has previously worked at the IT University of Copenhagen, Virtual Heroes, Duke University, the Institute for Creative Technologies at the University of Southern California, and the Indian Space Research Organization (ISRO).
Sept. 11
DeepDive: A Data System for Macroscopic Science, Christopher Re, Stanford University
Abstract
Many pressing questions in science are macroscopic, as they require scientists to integrate information from numerous data sources, often expressed in natural languages or in graphics; these forms of media are fraught with imprecision and ambiguity and so are difficult for machines to understand. Here I describe DeepDive, which is a new type of system designed to cope with these problems. It combines extraction, integration and prediction into one system. For some paleobiology and materials science tasks, DeepDive-based systems have surpassed human volunteers in data quantity and quality (recall and precision). DeepDive is also used by scientists in areas including genomics and drug repurposing, by a number of companies involved in various forms of search, and by law enforcement in the fight against human trafficking. DeepDive does not allow users to write algorithms; instead, it asks them to write only features. A key technical challenge is scaling up the resulting inference and learning engine, and I will des