2015 Poster Session Abstracts

The Department of Mathematics and Statistics

Title, Principal Investigator, and Abstract for the 2015 "A Look Ahead" UMBC Faculty Poster Session

Efficient Multilevel Methods for Large-scale Optimization Problems Constrained by Partial Differential Equations

Andrei Draganescu

Optimization problems constrained by partial differential equations (PDEs) form a research area to which the scientific and engineering communities have devoted an increased level of effort over the last decade. This is due both to the tremendous advances in high-performance computing technologies and to the wide range of applications, e.g., optimal design of manufacturing processes, history matching for petroleum reservoir simulations, and data assimilation for weather prediction. However, growth in computing power alone is insufficient for tackling PDE-constrained optimization problems at the same extreme scales at which the PDEs themselves can be solved: although current computing capabilities allow, in principle, for the numerical solution of PDEs with 10–100 billion unknowns, solving PDE-constrained optimization problems of comparable size still requires significant algorithmic development. In this presentation we highlight some of our efforts and challenges in developing, analyzing, and implementing efficient methods for solving large-scale optimization problems constrained by PDEs, with particular focus on the linear algebraic aspects of the solvers.


Overview of the UMBC High Performance Computing Facility

Matthias Gobbert

The UMBC High Performance Computing Facility (HPCF) is the community-based, interdisciplinary core facility for scientific computing and research on parallel algorithms at UMBC. Started in 2008 by more than 20 researchers from ten academic departments and research centers across all three colleges, it is supported by faculty contributions, federal grants, and the UMBC administration. This poster gives an overview of the capabilities that HPCF makes available to the campus and how to access them. See www.umbc.edu/hpcf for more information on HPCF and the projects using its resources.

The current machine in HPCF is the 240-node distributed-memory cluster maya. The newest components of the cluster are the 72 nodes with two eight-core 2.6 GHz Intel E5-2650v2 Ivy Bridge CPUs and 64 GB of memory, which include 19 hybrid nodes with two state-of-the-art NVIDIA K20 GPUs (graphics processing units) designed for scientific computing and 19 hybrid nodes with two cutting-edge 60-core Intel Xeon Phi 5110P accelerators. These new nodes are connected, along with the 84 nodes with two quad-core 2.6 GHz Intel Nehalem X5550 CPUs and 24 GB of memory, by a high-speed quad-data-rate (QDR) InfiniBand network for research on parallel algorithms. The remaining 84 nodes, with two quad-core 2.8 GHz Intel Nehalem X5560 CPUs and 24 GB of memory, are designed for fast number crunching and are connected by a dual-data-rate (DDR) InfiniBand network. All nodes are connected via InfiniBand to a central storage of more than 750 TB.


An Interdisciplinary Approach to Understanding Neuromechanical Locomotion

Kathleen Hoffman

Lampreys are model organisms for vertebrate locomotion because they have the same types of neurons as higher-order vertebrates, but in fewer numbers. Lamprey locomotion arises from electrical activity in the spinal cord, which innervates the muscle; the muscle in turn contracts the body, propelling the animal through the water. The resulting motion exerts a force on the fluid, and the fluid exerts forces on the body. I will present results of a long-term interdisciplinary collaboration that combines mathematical models and computational fluid dynamics with biological and fluid experiments to understand locomotion through the water.


Statistical Methodology for the Assessment of Biosimilarity

Thomas Mathew

Biosimilars are biopharmaceutical drugs that are highly similar products or imitations of already approved biological drugs. The Biologics Price Competition and Innovation Act of 2009 created a pathway for biosimilar drug approval under the Patient Protection and Affordable Care Act. Unlike generic drugs, exact copies of biological products are difficult to develop due to the complexity of the protein structure. For the approval of generic drug products, the common method is to assess average bioequivalence in terms of drug absorption through bioequivalence studies. Such a criterion is not applicable for biological products because of their molecular complexity. For demonstrating biosimilarity, specific guidance and criteria were unavailable until the FDA released a draft guidance document in May 2014. Even though average bioequivalence has been deemed inappropriate for assessing biosimilarity, the FDA guidance document does recommend average bioequivalence as the statistical criterion to be used. There is no consensus on what statistical criterion should be used, even though the assessment of biosimilarity is now a very active area of research. In the present research, we propose to use the concept of tolerance limits for assessing biosimilarity. The data to be used consist of the AUC, i.e., the area under the plasma concentration versus time curve, obtained after administering the drug to healthy volunteers. We consider the difference between the AUC responses for the brand-name drug and its copy, and construct an upper tolerance limit for the absolute difference. If the upper tolerance limit is small according to some regulatory guideline, we conclude biosimilarity. An advantage of this approach is that we have a criterion that takes into account the entire distribution, and not just the averages. Our research deals with a rigorous justification of the use of such a criterion, and the development of the relevant statistical methodology.
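To make the tolerance-limit idea concrete, the Python sketch below computes a standard one-sided upper (coverage, confidence) tolerance limit for a normal sample of signed log-AUC differences. It is a minimal illustration only, not the methodology developed in this research: the abstract works with the absolute difference, which requires additional care, and the data, sample size, and margin log(1.25) here are hypothetical stand-ins.

```python
import numpy as np
from scipy import stats

def upper_tolerance_limit(d, coverage=0.90, confidence=0.95):
    """One-sided upper (coverage, confidence) tolerance limit for a
    normal sample d, via the noncentral-t tolerance factor."""
    n = len(d)
    z_p = stats.norm.ppf(coverage)              # quantile to be covered
    k = stats.nct.ppf(confidence, df=n - 1, nc=z_p * np.sqrt(n)) / np.sqrt(n)
    return np.mean(d) + k * np.std(d, ddof=1)

# Hypothetical paired log-AUC differences (test minus reference)
rng = np.random.default_rng(0)
d = rng.normal(loc=0.02, scale=0.10, size=24)
limit = upper_tolerance_limit(d)
# Similarity would be concluded if the limit falls below a chosen
# regulatory margin, e.g., log(1.25) by the usual bioequivalence convention.
print(limit, limit < np.log(1.25))
```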


A Comparison of Some Linear Regression Models for Prediction of Daily Precipitation in the Missouri River Basin

Nagaraj Neerchal

Downscaling is the process of bringing the data provided by Global Climate Models (GCMs) from a coarser resolution (~100 km) to a finer resolution (~10 km). It is an important step in applications that assess the impact of large-scale climate changes on local conditions. In this poster, we compare the performance of three types of linear models to improve the quality of daily spatially interpolated downscaled precipitation for prediction purposes. The daily time series comes from a mixed distribution with positive probability of zero precipitation and shows strong seasonality. To these data we fit three types of models: (a) a simple linear regression, (b) a standard Tobit model, and (c) a two-part model with a binary model for the first part and a lognormal model for the second part. We compare model performance using the proportions of matched dry days and the adjusted MSE scaled by the proportion of matched rainy days. We empirically demonstrate that the two-part model has desirable characteristics from a prediction point of view.
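As a rough illustration of model (c), the Python sketch below fits a two-part model: a logistic regression for the occurrence of precipitation and a log-linear (lognormal) regression for the amount on wet days. The predictors (a coarse-resolution GCM value plus seasonal harmonics) and all data are synthetic stand-ins, not the covariates or data used in the study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical predictors: coarse-resolution GCM precipitation plus
# annual harmonics of the day index t to capture seasonality.
n = 2000
t = np.arange(n)
gcm = rng.gamma(shape=0.5, scale=4.0, size=n)
X = np.column_stack([gcm,
                     np.sin(2 * np.pi * t / 365.25),
                     np.cos(2 * np.pi * t / 365.25)])

# Synthetic station precipitation with a point mass at zero
wet = rng.random(n) < 1 / (1 + np.exp(-(0.4 * gcm - 1.0)))
y = np.where(wet, np.exp(0.3 * gcm + rng.normal(0, 0.5, n)), 0.0)

# Part 1: binary model for wet vs. dry days
occ = LogisticRegression().fit(X, (y > 0).astype(int))

# Part 2: lognormal amount model, fit on wet days only
amt = LinearRegression().fit(X[y > 0], np.log(y[y > 0]))

# Predicted precipitation: P(wet) * E[amount | wet], using the
# lognormal mean exp(mu + sigma^2 / 2)
sigma2 = np.var(np.log(y[y > 0]) - amt.predict(X[y > 0]), ddof=1)
y_hat = occ.predict_proba(X)[:, 1] * np.exp(amt.predict(X) + sigma2 / 2)
```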


Generalized Linear Models for Data with Direct and Proxy Observations with Gerontological Application

Nagaraj Neerchal

In this project, we review three different approaches for the statistical analysis of data sets with monotone missing data patterns. Such patterns occur in gerontological data sets when patients become unable to provide responses themselves due to the advancing severity of their conditions. It is common to use proxy responses from a relative or caregiver in these cases. We are investigating statistical models and analyses that can incorporate patient responses, proxy responses, or in some cases both, in the same framework.


Mathematical Modeling of the Dynamics of Physiological Systems

Bradford Peercy

We provide an overview of the modeling performed on various physiological systems in the Peercy group. Biological application examples include stochastic simulation and reductive analysis of cardiac cell calcium dynamics, intracellular signaling via cAMP in pancreatic beta cells and computational islets, activation-pathway parameter estimation of transcription factor nuclear translocation in skeletal muscle to control fiber type and muscle atrophy, extracellular and intracellular signaling for clustered cell migration, and how vitamin D levels vary across genotypes and impact macrophage antimicrobial peptide production. Our goal is to enable predictive experiments through mathematical analysis.


Hierarchical Schur Complement Preconditioner for the Stochastic Galerkin FEM

Bedrich Sousedik

Use of the stochastic Galerkin finite element methods leads to large systems of linear equations. These systems are typically solved iteratively. We propose a preconditioner that takes advantage of the recursive hierarchy in the structure of the global system matrices. Neither the global matrix nor the preconditioner needs to be formed explicitly. The ingredients include only the stiffness matrices from the truncated Karhunen-Loève expansion and a preconditioner for the mean-value problem. The performance is illustrated by numerical experiments.
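To indicate why the global matrix never needs to be assembled, the Python sketch below applies a stochastic Galerkin matrix of the common Kronecker form A = sum_i G_i (kron) K_i, where the K_i are stiffness matrices from the truncated Karhunen-Loève expansion and the G_i are small matrices coupling the polynomial chaos coefficients. This is a generic matrix-free matrix-vector product on random stand-in data, not the preconditioner proposed in this work.

```python
import numpy as np
import scipy.sparse as sp

def sg_matvec(K_list, G_list, x):
    """Apply A = sum_i G_i (kron) K_i to x without forming A,
    using (G kron K) vec(X) = vec(K X G^T) with column-stacking vec."""
    n_x = K_list[0].shape[0]            # deterministic (FEM) dimension
    n_g = G_list[0].shape[0]            # stochastic (chaos) dimension
    X = x.reshape(n_x, n_g, order="F")
    Y = sum(K @ X @ G.T for K, G in zip(K_list, G_list))
    return Y.reshape(-1, order="F")

# Tiny illustrative problem with random data standing in for the
# Karhunen-Loève stiffness terms and chaos coupling matrices
rng = np.random.default_rng(2)
n_x, n_g, m = 50, 10, 3
K_list = [sp.random(n_x, n_x, density=0.1, format="csr", random_state=i)
          for i in range(m)]
G_list = [rng.standard_normal((n_g, n_g)) for _ in range(m)]
x = rng.standard_normal(n_x * n_g)
y = sg_matvec(K_list, G_list, x)
```

A Krylov solver only needs this matrix-vector product, so the n_x * n_g global matrix is never stored; the same principle lets a preconditioner be defined through its action alone.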
