Friday, September 6, 2:30 - 3:20pm

Newton 203

SUNY Geneseo, Class of 2012

The past two decades have been an exciting time in biomedicine, with the completion of the Human Genome Project and the beginning of the "big data" era. These developments have made collaborations among medical professionals, biologists, computer scientists, and mathematicians both commonplace and necessary. Such interdisciplinary efforts have produced scientific breakthroughs in our understanding of fundamental biological principles, such as the role of DNA and how it is regulated, and have deepened our understanding of disease mechanisms. A wide variety of mathematical and computational techniques have driven these biomedical advances, ranging from differential equation models to data processing and compression methods (such as wavelet transforms) to optimization methods and statistical analysis. I will give a broad overview of the exciting and active areas of computational biomedical research, and then look more closely at my own area of research in cancer genomics.

Thursday, September 19, 4:00 - 4:50pm

Newton 203

Suppose P is a convex polygon whose vertices lie at integer points. Pick's Theorem says that the area of P can be computed simply by counting the integer points in the interior and on the boundary of P. I will discuss Pick's Theorem as well as a generalization to higher dimensions via Ehrhart polynomials. If time permits, I will also discuss some applications.
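Pick's Theorem states A = I + B/2 - 1, where I counts interior lattice points and B counts boundary lattice points. It is easy to check computationally; the short Python sketch below (my illustration, not material from the talk) brute-force counts both quantities for a convex polygon listed counterclockwise and recovers the area.

```python
from math import gcd

def pick_area(verts):
    """Area of a convex lattice polygon (vertices counterclockwise)
    via Pick's theorem: A = I + B/2 - 1."""
    n = len(verts)
    # Boundary lattice points: each edge (dx, dy) contributes gcd(|dx|, |dy|)
    # points when one endpoint is excluded, so each vertex is counted once.
    B = sum(gcd(abs(verts[(i + 1) % n][0] - verts[i][0]),
                abs(verts[(i + 1) % n][1] - verts[i][1])) for i in range(n))

    # Interior lattice points, counted by brute force: for a counterclockwise
    # convex polygon, a point is strictly inside iff every edge cross
    # product is positive.
    def strictly_inside(p):
        for i in range(n):
            (x1, y1), (x2, y2) = verts[i], verts[(i + 1) % n]
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) <= 0:
                return False
        return True

    xs = [v[0] for v in verts]
    ys = [v[1] for v in verts]
    I = sum(strictly_inside((x, y))
            for x in range(min(xs), max(xs) + 1)
            for y in range(min(ys), max(ys) + 1))
    return I + B / 2 - 1

# Right triangle with legs 4 and 3: I = 3, B = 8, so A = 3 + 4 - 1 = 6.
print(pick_area([(0, 0), (4, 0), (0, 3)]))  # 6.0
```

The same function confirms, say, a 4-by-3 rectangle has area 12.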

Friday, October 4, 4:00 - 4:50pm

Newton 203

SUNY Geneseo, Class of 2011

Suppose that *y = Ax*, with *A ∈ R*^{n×m}, *y ∈ R*^{n}, *x ∈ R*^{m}, and *n < m*. The problem of reconstructing *x* given *y* and *A* is generally ill-posed. In the case where *x* has sparse support (|supp(*x*)| ≪ *m*), the compressed sensing (CS) literature provides algorithms, many based on l_{1} minimization, by which we can accurately reconstruct *x* under certain sufficient conditions. In the case where supp(*x*) is partially known, the Modified-CS algorithm can be used to reconstruct *x*, and the sufficient conditions for reconstruction can be relaxed. Weighted-l_{1} minimization combines standard CS and Modified-CS. We present new sufficient conditions for exact reconstruction by weighted-l_{1} minimization, compare them to known bounds, and report results of simulations. An application to image processing is included as a motivating example.
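For readers who want to experiment, weighted l_{1} minimization can be posed as a linear program. The sketch below is my own illustration, not the speaker's code; it assumes NumPy and SciPy, solves min Σ w_i |x_i| subject to Ax = y with scipy.optimize.linprog, and with all weights equal to 1 (standard CS) typically recovers a 3-sparse x from n = 20 random measurements in dimension m = 40.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1_reconstruct(A, y, w):
    # Solve min_x sum_i w_i |x_i| subject to A x = y, recast as a linear
    # program with auxiliary variables t_i >= |x_i|:
    #   minimize w . t   subject to   x - t <= 0,  -x - t <= 0,  A x = y.
    n, m = A.shape
    c = np.concatenate([np.zeros(m), w])            # objective acts on t only
    I = np.eye(m)
    A_ub = np.block([[I, -I], [-I, -I]])            # x - t <= 0 and -x - t <= 0
    b_ub = np.zeros(2 * m)
    A_eq = np.hstack([A, np.zeros((n, m))])         # A x = y
    bounds = [(None, None)] * m + [(0, None)] * m   # x free, t >= 0
    res = linprog(c, A_ub, b_ub, A_eq, y, bounds=bounds)
    return res.x[:m]

rng = np.random.default_rng(0)
n, m = 20, 40
A = rng.standard_normal((n, m))
x_true = np.zeros(m)
x_true[[3, 17, 31]] = [2.0, -1.5, 3.0]              # sparse: |supp(x)| = 3
y = A @ x_true

x_hat = weighted_l1_reconstruct(A, y, np.ones(m))   # all weights 1: standard CS
print("max reconstruction error:", np.max(np.abs(x_hat - x_true)))
```

Down-weighting the entries on a known part of the support is the Modified-CS end of the spectrum; intermediate weights give the weighted-l_{1} methods the talk analyzes.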

Monday, October 28, 4:30 - 5:20pm

Newton 201

The technological advances of recent decades have made it possible to collect huge amounts of data in all walks of life. In fact, it's quite common these days to hear that we live in the era of so-called "Big Data." Interestingly, large, complex data sets come in various shapes, sizes, and characteristics. In this lecture, I will present a simple taxonomy of large/massive data sets and briefly highlight some of the mathematical and statistical tools commonly used in data mining and machine learning to extract meaningful information from different forms of complex data. I intend to touch on data from fields such as image processing, speech/audio recognition, DNA microarray analysis, classification of text documents, intrusion detection, and consumer statistics, to name a few. My focus throughout the lecture will be to show, and where possible demonstrate computationally, that while some of the mathematical/statistical tools needed to tackle these complex data structures are novel and cutting edge, others are straightforward applications or gentle extensions of the traditional statistical arsenal.

Wednesday, November 20, 2:30 - 3:45pm

Newton 202

A lively overview of more than two thousand years of calculus history: not only who did what along the way, but also the cultural and sociological causes and effects of the calculus. Strongly recommended for anyone who has taken or is taking calculus.

Thursday, November 21, 4:00 - 4:50pm

Newton 203

Now that computationally intensive techniques are readily available, have traditional methods of data analysis been losing our confidence? Are curricula that teach statistics through randomization- and bootstrap-based techniques making errors? Have we rejected the logic of conventional inference and statistical thinking as data become ginormous? Let's make a decision at the 5% level using simulation studies.

Students are encouraged to bring devices with internet access (laptop, smart phone etc).
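As a taste of the simulation-based techniques at issue, here is a minimal Python sketch (my own illustration, assuming NumPy, not material from the talk) of a randomization test for the two-sample problem, ending with a decision at the 5% level.

```python
import numpy as np

rng = np.random.default_rng(42)
control = rng.normal(0.0, 1.0, 30)
treatment = rng.normal(1.5, 1.0, 30)    # simulated true mean shift of 1.5
observed = treatment.mean() - control.mean()

# Randomization test: under the null hypothesis the group labels are
# exchangeable, so we reshuffle the pooled data many times and record how
# often a mean difference at least as extreme as the observed one occurs.
pooled = np.concatenate([control, treatment])
n_perm = 2000
extreme = 0
for _ in range(n_perm):
    rng.shuffle(pooled)
    diff = pooled[30:].mean() - pooled[:30].mean()
    if abs(diff) >= abs(observed):
        extreme += 1
p_value = extreme / n_perm

print(f"p-value: {p_value:.3f}")
print("reject H0 at the 5% level" if p_value < 0.05 else "fail to reject H0")
```

No distributional assumptions are needed beyond exchangeability under the null, which is exactly the contrast with the traditional t-test the abstract hints at.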

Monday, November 25, 3:00 - 3:50pm

Newton 214

This talk discusses my study in progress, which centers on Hispanic students’ math achievement using data from the Grade 12 National Assessment of Educational Progress (NAEP). I explain the rationale for focusing on functions as a representation of math achievement and highlight methods I use to create a profile of achievement for first-generation college-bound Hispanic students using their performance on function items. In addition, I discuss the implications my findings have for policymakers and others interested in addressing achievement gaps in mathematics.

Monday, December 2, 3:00 - 3:50pm

Newton 214

Students recall and model the square root as the length of one side of a square. But what if the square is not a perfect square? This collection of activities explores the square roots of not-so-perfect squares and develops an algorithm for approximating the not-so-perfect square root by a rational value.
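One classical algorithm of this kind, which may or may not be the one developed in the workshop, is the Babylonian (divide-and-average) method. A short Python sketch using exact rational arithmetic:

```python
from fractions import Fraction

def rational_sqrt(n, steps=4):
    # Babylonian (divide-and-average) method with exact rational arithmetic:
    # repeatedly replace a guess g by the average of g and n/g. Each step
    # roughly doubles the number of correct digits.
    g = Fraction(n)              # starting guess; any positive value works
    for _ in range(steps):
        g = (g + Fraction(n) / g) / 2
    return g

r = rational_sqrt(2)
print(r)             # 665857/470832
print(float(r) ** 2)  # very close to 2
```

Four steps already give the not-so-perfect square root of 2 to about twelve decimal places as a single fraction.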

Wednesday, December 4, 2:30 - 3:20pm

Newton 203

In an era of technology, students benefit from instant feedback, which lets them focus more on analysis and drawing conclusions. When utilized properly, technology can be a useful tool for helping students learn new statistical concepts. But what other methods can teachers employ in the classroom to aid and support students as they learn statistics? How can statistics teachers be confident that students are truly grasping concepts? Writing is one answer to these questions. Join me as we examine the benefits of writing. We will also discuss the hurdles that students and teachers must overcome to successfully implement writing in a statistics classroom. Please bring your smartphone, iPad, or an equivalent device.

Friday, December 6, 3:15 - 4:05pm

Newton 201

Probability theory is a very important tool for mathematical and statistical analysis. The aim of this talk is to discuss basic concepts of probability and a probabilistic model of social conflict among non-annihilating opponents. In this model, opponents have no strategic priority with respect to one another; the conflict interaction among them only produces a certain redistribution of common areas of interest. We investigate the limiting distribution of the conflict areas that arises from infinitely many conflict interactions over the existence space.

Monday, December 9, 2:30 - 3:20pm

Newton 203

Ranking is a commonly used procedure for evaluating everyday situations, for example in medicine, business, sports, and many other fields. The same method is used in nonparametric statistics, the basic idea being to order the observations on a more abstract level. The talk will explain the usefulness of this concept in testing procedures, the famous two-sample problem being an important example. A more advanced notion of ordering is used to analyze the "shoulder tip pain" data set that appears in Brunner, Domhof, and Langer (2002), a clinical study of 41 patients who had undergone a laparoscopic cholecystectomy and developed shoulder pain after the surgery. The main question is to test the effectiveness of the treatment for shoulder pain.
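To make the ranking idea concrete, here is a minimal Python sketch (my illustration, not material from the talk) of the Wilcoxon rank-sum statistic for the two-sample problem.

```python
def rank_sum(a, b):
    # Wilcoxon rank-sum statistic: pool both samples, rank the pooled values
    # starting from 1, and sum the ranks that came from sample a.
    # (Assumes no ties; with ties one would use midranks.)
    pooled = sorted((v, g) for g, sample in [(0, a), (1, b)] for v in sample)
    return sum(rank for rank, (v, g) in enumerate(pooled, start=1) if g == 0)

# Every treatment value exceeds every control value, so the control sample
# occupies the lowest ranks 1 through 4 and the statistic is at its minimum.
control = [1.2, 0.8, 1.5, 0.9]
treatment = [2.1, 1.9, 2.4, 1.6]
print(rank_sum(control, treatment))  # 10
```

An unusually small or large rank sum, compared with its distribution under the null hypothesis, is evidence that one sample tends to produce larger values, without any assumption about the shape of the underlying distributions.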

Tuesday, December 10, 3:00 - 3:50pm

Newton 203

To better understand weather systems, atmospheric scientists and meteorologists are generally interested in having available data interpolated to much finer spatial and temporal grids. Many physical or deterministic models used to generate regional or global climate models can provide accurate point predictions, but it is very difficult for them to give realistic uncertainty estimates. This calls for statistical models that can produce uncertainty estimates through conditional simulations. In this project, we look at minute-by-minute atmospheric pressure space-time data from the Atmospheric Radiation Measurement program. We explain how spatial statistics can be used to model data that are sparse in space but high frequency in time. Because of the interesting local features of the data, we also take advantage of the localization properties of wavelets to capture the local dynamics of the high-frequency data. This method of modeling space-time processes using wavelets produces accurate point predictions with high precision, allows for fast computation, and eases the production of meteorological maps on large spatial and temporal scales.
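The localization property that makes wavelets attractive for such data can be seen in a single level of the Haar transform. A minimal NumPy sketch (my own illustration, not the project's code):

```python
import numpy as np

def haar_step(x):
    # One level of the orthonormal Haar wavelet transform: pairwise
    # averages capture the smooth, large-scale part of the signal, and
    # pairwise differences capture the local detail.
    x = np.asarray(x, dtype=float).reshape(-1, 2)
    approx = (x[:, 0] + x[:, 1]) / np.sqrt(2)
    detail = (x[:, 0] - x[:, 1]) / np.sqrt(2)
    return approx, detail

# A signal that is flat except for one sharp local feature: the feature
# shows up as a single large detail coefficient, which is why wavelets
# are well suited to capturing local dynamics in high-frequency data.
signal = [1.0, 1.0, 1.0, 1.0, 5.0, 1.0, 1.0, 1.0]
approx, detail = haar_step(signal)
print(detail)  # zero everywhere except across the jump
```

A Fourier basis would spread that one jump across all frequencies; the wavelet basis confines it to the coefficients near its location.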