STM2015 & CSM2015

Technical Program

Location: Conference Room I (2nd floor), ISM

 

July 13 (Monday)

Time

10:15 – 10:30

 

Opening

 

Morning: Tutorial: Stable and Lévy processes

Time

10:30 – 12:00

Speaker

Prof. Makoto Maejima (Keio U)

Title

Stable distributions, stable processes and Lévy processes

Abstract

In this tutorial talk, I will explain basic facts about stable distributions, stable processes and Lévy processes. Most of the material is drawn from the following sources:

[1] G. Samorodnitsky and M.S. Taqqu, Stable Non-Gaussian Random Processes: Stochastic Models with Infinite Variance, 1994.

[2] A. Janicki and A. Weron, Simulation and Chaotic Behavior of α-Stable Stochastic Processes, 1994.

[3] V.V. Uchaikin and V.M. Zolotarev, Chance and Stability: Stable Distributions and their Applications, 1999.

[4] J. Nolan, Bibliography on stable distributions, processes and related topics, original version August 3, 2003, revised June 22, 2015, http://academic2.american.edu/jpnolan/stable/StableBibliography.pdf

[5] K. Sato, Lévy Processes and Infinitely Divisible Distributions, first edition 1999, revised edition 2013.
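
For orientation, the object at the centre of the tutorial is most compactly described through its characteristic function; the following standard parametrisation (as in [1]) may serve as a reference point:

\[
\mathbb{E}\, e^{i\theta X} =
\begin{cases}
\exp\!\left( -\sigma^{\alpha} |\theta|^{\alpha} \left( 1 - i\beta\, \mathrm{sign}(\theta) \tan\tfrac{\pi\alpha}{2} \right) + i\mu\theta \right), & \alpha \neq 1,\\[4pt]
\exp\!\left( -\sigma |\theta| \left( 1 + i\beta\, \tfrac{2}{\pi}\, \mathrm{sign}(\theta) \ln|\theta| \right) + i\mu\theta \right), & \alpha = 1,
\end{cases}
\]

where \(\alpha \in (0,2]\) is the stability index, \(\beta \in [-1,1]\) the skewness, \(\sigma > 0\) the scale and \(\mu\) the shift; \(\alpha = 2\) recovers the Gaussian law, and a Lévy process whose increments follow this law is an \(\alpha\)-stable Lévy motion.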

 

Afternoon I: Research Talk: Special Topics in Optimization and Inference

Time

13:30 – 14:30

Speaker

Prof. Samuel Kou (Harvard U)

Title

Big data, Google and disease detection: the statistical story

Abstract

Big data collected from the internet have generated significant interest not only in the academic community but also in industry and government agencies. They bring great potential in tracking and predicting massive social activities. In this talk we focus on tracking disease epidemics. We will discuss the applications, in particular Google Flu Trends, some of the fallacies, and the statistical implications. We will propose a new model that utilizes publicly available online data to estimate disease epidemics. Our model outperforms all previous real-time tracking models for influenza epidemics at the national level in the US. We will also draw some lessons for big data applications.

Video

 

Time

14:30 – 15:30

Speaker

Prof. Frederick Kin Hing Phoa (Institute of Statistical Science)

Title

A Swarm Intelligence Based (SIB) Method and Its Applications to Statistics and Spatial Clustering

 

Afternoon II: Research Talk and Tutorial: Fractional Derivatives and Stable Processes

Time

16:00 – 17:30

Speaker

Dr. Malcolm Egan (Agent Technology Center)

Title

Symmetric Alpha-Stable Random Variables, Fractional Calculus, and the Fourier-Mellin Triangle

Abstract

The characteristic function (the Fourier transform of the density function) plays a key role in the theory of alpha-stable distributions. An alternative perspective, first investigated by Zolotarev for stable distributions in a 1957 paper, is based on the Mellin transform of the density function. This Mellin-transform perspective has intimate ties to the fractional calculus developed by Riemann, Liouville, Marchaud and Riesz. In this talk, we focus on the connections, via fractional calculus, between symmetric alpha-stable distributions, their characteristic functions, and the Mellin transforms of their densities. We review and make explicit the recent use of this combination of perspectives by Di Paola et al. to develop integral density representations useful for numerical approximation. We also show additional important properties using these techniques, including expressions for fractional moments and probability metrics.
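
For orientation, the Mellin transform underlying this perspective is the standard one (a textbook definition, not specific to the talk):

\[
(\mathcal{M}f)(s) = \int_0^\infty x^{s-1} f(x)\, dx,
\]

so for a symmetric density \(f\) the fractional moments are Mellin values, \(\mathbb{E}|X|^p = 2\,(\mathcal{M}f)(p+1)\), which for a symmetric \(\alpha\)-stable law with \(\alpha < 2\) is finite exactly when \(0 < p < \alpha\).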

Slides      Video

 


 

July 14 (Tuesday)

Morning: Research Talk and Tutorial: Non-stationary stable processes and calibration

Time

9:00 – 10:30

Speaker

Prof. Nourddine Azzaoui (Université Blaise Pascal)

Title

Identification and calibration problems in some non-stationary alpha-stable processes

Abstract

Thanks to the spectacular progress of modern computing, the spectral analysis of stochastic processes has seen exceptional advances. Research in this domain is motivated by the fact that spectral theory provides a powerful tool to explore the statistical properties of stochastic processes. Indeed, these techniques have proved their efficiency in the study of several natural phenomena: in signal and image processing, in econometric forecasting, in astronomy, and so on. This talk concerns an important class of stochastic processes with infinite second-order moments, namely symmetric alpha-stable processes. The stationary case has been widely studied in the literature. A general concept of spectral characterisation of non-stationary processes is discussed in the talk of G. W. Peters, who shows that, under a suitable additivity condition on the covariation, the law of an alpha-stable process can be characterized by a complex bimeasure, as in the case of second-order processes. In this talk we give details about efficient estimation of the bi-spectral density (of the bimeasure) in two different ways. In the first part of the talk, we use moment techniques to give an asymptotically unbiased estimator of the bi-spectral density from continuous-time observations of the process; we also study the statistical properties and consistency of this estimator. In the second part, we extend the continuous-time results to give asymptotically unbiased estimators from discrete observations of the process.

(Slides)*      (Video)*      *after publishing

 

Time

10:30 – 12:00

Speaker

Dr. Gareth W. Peters (UCL)

Title

Spectral Characterization of the Family of alpha-Stable Processes that Generalize Gaussian Process Models

(Slides)*      (Video)*      *after publishing

 

Afternoon I: Methodology and Application: Stable processes and wireless communications

Time

13:30 – 15:30

Speaker

Prof. Laurent Clavier (Telecom Lille 1)

Title

Stable processes and wireless communications

Abstract

Telecommunications are undergoing a major revolution: sensor networks, 5G, cognitive radio, etc. In less than a decade it is not unreasonable to speculate that autonomous devices will build their own networks: they will adapt to new environments, change their configuration, and adapt to disappearing or appearing nodes; they will have to track the best communication opportunities and ensure reliability under strict energy constraints. However, a complete redefinition of communication rules is needed to face the main foreseen challenges, interference and adaptability, under a hard energy constraint. We will show why new mathematical models are needed. Alpha-stable distributions are one attractive solution for representing interference. However, they raise challenges, and we will discuss some of them: receiver design, space/time dependence models and channel capacity.
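
As a quick numerical illustration of why such models matter, the sketch below draws symmetric alpha-stable interference samples with the classical Chambers-Mallows-Stuck generator and examines their tails; this is a generic illustration with arbitrary parameter choices, not code from the talk:

```python
import numpy as np

def sas_samples(alpha, scale=1.0, size=10**6, seed=0):
    """Symmetric alpha-stable samples via the Chambers-Mallows-Stuck method."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)  # uniform phase
    w = rng.exponential(1.0, size)                # unit-mean exponential
    x = (np.sin(alpha * u) / np.cos(u) ** (1.0 / alpha)
         * (np.cos((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha))
    return scale * x

# P(|X| > t) decays like t^(-alpha), so large interference spikes occur
# far more often than any Gaussian model would predict.
x = sas_samples(alpha=1.5)
for t in (5.0, 20.0, 100.0):
    print(f"P(|X| > {t}) ~ {np.mean(np.abs(x) > t):.1e}")
```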

Slides      Video

 

Afternoon II: Application: Wireless communication

Time

16:00 – 17:00

Speaker

Dr. Malcolm Egan (Agent Technology Center)

Title

Vector Quantization Codebook Design for Multiuser MIMO

Abstract

In this talk, we focus on wireless cellular networks where a base station with multiple antennas transmits to multiple users simultaneously, a setup known as multiuser MIMO (MU-MIMO). This setup plays an important role in current standards and forms the basis for advanced interference mitigation techniques, such as network MIMO. A key aspect of MU-MIMO is that vector channel state measurements are fed back from each user in order to optimize transmissions from the base station to the users. In practice, there are strong constraints on the capacity of the feedback links between users and the base station. As such, an important problem is to design vector codebooks that reduce the effect of errors due to quantization (i.e., compression). We show how this problem can be formulated within frame theory and propose codebooks based on unitary representations of groups.
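
To make the quantization step concrete, here is a minimal limited-feedback sketch in which a user quantizes its channel direction to the best-aligned codeword and feeds back only the index; the random codebook is purely illustrative, not the group-representation construction proposed in the talk:

```python
import numpy as np

rng = np.random.default_rng(1)
n_ant, n_bits = 4, 6                       # BS antennas, feedback bits per user

# Random unit-norm codebook of 2**n_bits codewords (illustrative only).
codebook = (rng.standard_normal((2**n_bits, n_ant))
            + 1j * rng.standard_normal((2**n_bits, n_ant)))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)

# A user's channel vector, normalized to a direction.
h = rng.standard_normal(n_ant) + 1j * rng.standard_normal(n_ant)
h /= np.linalg.norm(h)

# Quantize: maximize |<c, h>|, i.e. minimize the chordal distance to h.
corr = np.abs(codebook.conj() @ h)
idx = int(np.argmax(corr))
print(f"feedback index: {idx}, alignment |<c,h>|: {corr[idx]:.3f}")
```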

Slides      Video

 

Time

17:00 – 18:00

Speaker

Prof. Tor Andre Myrvoll (SINTEF)

Title

Applications of non-parametric copulas to wireless communications

Abstract

Copulas provide us with a powerful framework for modeling multivariate dependency structures. Parametric copulas, such as those from the Gaussian or Archimedean families, are well known and interpretable, but for greater modeling flexibility, non-parametric copulas can be considered. In this work we show how non-parametric copulas can be used to obtain empirical dependency models of fading characteristics.
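
As a small illustration of the starting point, a non-parametric copula analysis begins by rank-transforming each margin to pseudo-observations and evaluating the empirical copula; a minimal sketch with synthetic data standing in for fading measurements:

```python
import numpy as np
from scipy.stats import rankdata

def pseudo_observations(x):
    """Map each margin to (0,1) by ranks: removes the marginal distributions."""
    n = x.shape[0]
    return np.column_stack([rankdata(col) / (n + 1) for col in x.T])

def empirical_copula(u, points):
    """C_n(t) = (1/n) #{i : U_i <= t componentwise} at each query point t."""
    return np.array([np.mean(np.all(u <= t, axis=1)) for t in points])

# Synthetic dependent pair standing in for two fading characteristics.
rng = np.random.default_rng(2)
z = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=1000)
u = pseudo_observations(z)
print(empirical_copula(u, [(0.5, 0.5), (0.9, 0.9)]))
```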

Slides

 


 

July 15 (Wednesday)

Morning: Research Talk and Tutorial: Spatial counting process for seismicity

Time

9:00 – 10:00

Speaker

Prof. Yosihiko Ogata (ERI, U. of Tokyo and ISM, ROIS)

Title

Point process modelling: Tutorial with some topics in statistical seismology

Abstract

The occurrence times of earthquakes can be considered a point process, and suitable modelling of the conditional intensity function of a point process is useful for investigating various statistical features of seismic activity. This talk introduces models and statistical methods for the analysis of seismicity data, which can also be used for various data in other research fields. Some further information is included on my home page: http://www.ism.ac.jp/~ogata/. However, in the present talk I will avoid duplicating my topics at STM2014, http://www.ismvideo.org/STM2014/Yosi.html.
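
For readers new to the area, the canonical example of such a conditional intensity model is the epidemic-type aftershock sequence (ETAS) model, in which the rate of events at time \(t\), given the history \(\mathcal{H}_t\), is

\[
\lambda(t \mid \mathcal{H}_t) = \mu + \sum_{i:\, t_i < t} \frac{K\, e^{\alpha (M_i - M_0)}}{(t - t_i + c)^{p}},
\]

where \(\mu\) is the background rate, the sum runs over past events with times \(t_i\) and magnitudes \(M_i \ge M_0\), and the parameters \((K, c, p, \alpha)\) are estimated by maximizing the point-process log-likelihood \(\sum_i \log \lambda(t_i \mid \mathcal{H}_{t_i}) - \int_0^T \lambda(t \mid \mathcal{H}_t)\, dt\).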

Slides

 

Time

10:00 – 11:00

Speaker

Prof. Shiyong Zhou (Peking U)

Title

Earthquake Hazard Prediction Based on Seismicity Simulation

Abstract

Seismicity over 10,000 years in Western Sichuan, China, has been simulated with the mechanical synthetic seismicity model we developed, so that the maximum magnitude and the recurrence time of strong earthquakes in the region of interest can be estimated. According to the analysis of the simulated synthetic earthquake catalogue, the occurrence of strong earthquakes with Ms ≥ 7.0 in the whole region of Western Sichuan is rather random, very close to a Poisson process with seismic rate 0.0454/year, which means it is reasonable to estimate the regional earthquake risk in Western Sichuan with a Poisson model. However, the occurrence of strong earthquakes with Ms ≥ 7.0 on the individual faults of Western Sichuan is far from a Poisson process and could be predicted with a time-dependent prediction model. The fault interaction matrices and earthquake transfer probability matrices among the faults in Western Sichuan have been calculated from the analysis of the simulated synthetic catalogues. We have also calculated the static change in Coulomb failure stress (CFS) on one fault induced by a strong earthquake on another fault in Western Sichuan, to discuss the physical implications of the earthquake transfer probability matrices inferred from the synthetic catalogue.

Slides      Video

 

Time

11:00 – 12:00

Speaker

Prof. Sung Nok Chiu (Hong Kong Baptist University)

Title

Model-free tests for spatial point patterns

Abstract

Data in the form of point patterns are often encountered in spatial statistics, e.g. the locations of a certain species of plant or the locations of mines. Point pattern analysis is often carried out by regarding the given pattern as a realisation of some spatial point process model, such as the Poisson process or the Gibbs process, and then applying parametric methods for model fitting and testing. However, certain crucial questions that should be addressed before modelling are general and model-free, such as whether the distribution of the model from which the data are generated is translationally invariant (stationary) and rotationally invariant (isotropic), or whether two given patterns (sampled from, e.g., a normal tissue and a cancerous tissue) have the same distribution. In this talk we review the existing tests for these questions, discuss their weaknesses, and propose new model-free testing procedures that do not suffer from such weaknesses. Simulation results and case studies will be shown to demonstrate the advantages of our proposed tests over the tests in the literature.
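
For contrast with the model-free procedures proposed in the talk, the classical baseline is the quadrat-count chi-square test of complete spatial randomness; a minimal sketch (a standard textbook test, not the speaker's method):

```python
import numpy as np
from scipy.stats import chi2

def quadrat_test(points, k=4):
    """Chi-square test of spatial homogeneity on the unit square,
    using a k-by-k grid of equal quadrats."""
    counts, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                                  bins=k, range=[[0, 1], [0, 1]])
    expected = len(points) / k**2                 # mean count per quadrat
    stat = np.sum((counts - expected) ** 2) / expected
    return stat, chi2.sf(stat, df=k**2 - 1)       # upper-tail p-value

rng = np.random.default_rng(3)
stat, p = quadrat_test(rng.uniform(size=(400, 2)))
print(f"chi2 = {stat:.1f}, p = {p:.3f}")          # large p: consistent with CSR
```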

 

Afternoon I: Methodology and Application: Counting process

Time

13:30 – 14:30

Speaker

Prof. Jiancang Zhuang (ISM)

Title

(TBA)

 

Time

14:30 – 15:30

Speaker

Prof. Zengping Wen (Institute of Geophysics, China Earthquake Administration)

Title

Earthquake damage estimation, seismic hazard analysis and earthquake risk analysis

Abstract

The estimation of probable future losses is a matter of increasing interest to those concerned with public administration in earthquake-prone regions and to those who manage large numbers of buildings, and it is of particular concern to the insurance and reinsurance companies which insure those buildings. A lot of effort has been devoted to the problem of how to devise reliable estimates, given the large uncertainties in the pattern of past earthquake occurrence, both in time and space, and our limited understanding of the behavior of the vulnerable elements of the built environment.

This presentation introduces earthquake damage phenomena and the engineering characteristics of strong ground motion. The two components comprising the basic structure of a loss estimation study are presented. One component, the seismic hazard analysis, involves the identification and quantitative description of the earthquakes to be used as a basis for evaluating losses. The probabilistic seismic hazard analysis consists of four main steps: 1) determine the seismic provinces or belts where future earthquakes may occur according to the law of seismicity; 2) determine the parameters of seismicity for each province and source; 3) select the parameters for seismic hazard, such as peak acceleration, response spectrum and/or intensity, and determine their attenuation laws suitable for the region; 4) determine the seismic hazard at the site (a compact formula combining these steps is given after this paragraph). The second component, the vulnerability analysis, entails analysis of the vulnerability of buildings to earthquakes and of the losses that may result from this damage. A key step is to establish the relationships among the intensity of ground shaking, the resulting damage, and the associated losses, expressed by means of a damage probability distribution. The information assembled from these two components (seismic hazard analysis and vulnerability analysis) is combined to produce the probable distribution of losses for all possible earthquake events in a given time period and thus to determine the earthquake risk. The earthquake risk may be measured in terms of expected economic loss, or in terms of the number of lives lost or the extent of physical damage to property, where appropriate measures of damage are available. The methodology of probabilistic earthquake risk analysis for calculating the probable levels of loss from all sizes of earthquake over a period of time, e.g. expected losses per year or annualized loss estimates, is presented. Risk analysis can be used to estimate the number of buildings destroyed, lives lost and total financial costs over a given period of time.
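
In symbols, the four hazard-analysis steps above combine into the classical total-probability (Cornell-type) integral: the annual rate at which ground motion \(A\) at the site exceeds a level \(a\) is

\[
\nu(a) = \sum_{i} \lambda_i \iint P(A > a \mid m, r)\, f_{M_i}(m)\, f_{R_i}(r)\, dm\, dr,
\]

where \(\lambda_i\) is the activity rate of source \(i\), \(f_{M_i}\) and \(f_{R_i}\) are the magnitude and source-to-site distance densities, and \(P(A > a \mid m, r)\) comes from the attenuation law; under Poissonian occurrence, the probability of at least one exceedance in \(T\) years is \(1 - e^{-\nu(a) T}\).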

Keywords: seismic hazard analysis, vulnerability analysis, earthquake risk analysis

Slides      Video

 

Afternoon II: Application: Bayesian separation

Time

16:00 – 17:00

Speaker

Prof. Jen-Tzung Chien (National Chiao Tung U)

Title

Bayesian source separation

Abstract

This talk will survey state-of-the-art machine learning approaches to model-based source separation, with applications to speech separation, instrumental music separation and singing-voice separation. Traditional separation approaches assume a static mixing condition, which cannot capture the underlying dynamics of the source signals and sensor networks. The uncertainty of the system parameters may not be precisely characterized, so robustness against adverse environments is not guaranteed. The temporal structure of the mixing system, as well as of the source signals, may not be properly captured. The model complexity or the dictionary size we assume may not fit the true one in the source signals. With the remarkable advances in machine learning algorithms, the issues of underdetermined mixtures, nonstationary mixing conditions, ill-posed conditions and model regularization have been resolved by introducing the solutions of nonnegative matrix factorization, online learning, Gaussian processes, sparse learning, dictionary learning, Bayesian inference and model selection. This talk will present how these algorithms are connected and why they work for source separation, particularly in speech and music applications.
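
Of the building blocks listed above, nonnegative matrix factorization is the simplest to sketch: a magnitude spectrogram V is approximated by a dictionary W times activations H via the classical multiplicative updates (a generic sketch, not the Bayesian variants discussed in the talk):

```python
import numpy as np

def nmf(V, rank, n_iter=200, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates minimizing ||V - W H||_F^2."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(size=(V.shape[0], rank))      # spectral basis vectors
    H = rng.uniform(size=(rank, V.shape[1]))      # time-varying activations
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# A random nonnegative "spectrogram" (frequency bins x time frames).
V = np.abs(np.random.default_rng(4).standard_normal((257, 100)))
W, H = nmf(V, rank=8)
print("relative error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```

In separation, each source is assigned a subset of the columns of W, and its spectrogram is reconstructed from the corresponding rows of H.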

Slides      Video


Poster session

Time

17:00 – 18:30

Location

Seminar room 1 (on the 3rd floor, D305)

 


 

July 16 (Thursday)

Morning: Tutorial and Application: High-dimensional inference with Monte Carlo and tracking

Time

9:00 – 10:00

Speaker

Prof. Francois Septier (Telecom Lille 1)

Title

Langevin and Hamiltonian based Sequential MCMC for Efficient Bayesian Filtering in High-dimensional Spaces

Abstract

Nonlinear non-Gaussian state-space models arise in numerous applications in statistics and signal processing. In this context, one of the most successful and popular approximation techniques is the Sequential Monte Carlo (SMC) algorithm, also known as particle filtering. Nevertheless, this method tends to be inefficient when applied to high dimensional problems. In this work, we focus on another class of sequential inference methods, namely the Sequential Markov Chain Monte Carlo (SMCMC) techniques, which represent a promising alternative to SMC methods. After providing a unifying framework for the class of SMCMC approaches, we propose novel efficient strategies based on the principle of Langevin diffusion and Hamiltonian dynamics in order to cope with the increasing number of high-dimensional applications. Simulation results show that the proposed algorithms achieve significantly better performance compared to existing algorithms.  (Joint work with Dr. Gareth Peters - http://arxiv.org/abs/1504.05715)
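
The Langevin ingredient can be illustrated in isolation: a Metropolis-adjusted Langevin (MALA) step drifts proposals along the gradient of the log-target before the usual accept/reject correction. The sketch below targets a fixed distribution, so it shows only the proposal mechanism, not the sequential SMCMC scheme of the talk:

```python
import numpy as np

def mala_step(theta, log_pi, grad_log_pi, eps, rng):
    """One Metropolis-adjusted Langevin step targeting pi."""
    def log_q(y, x):  # proposal density N(y; x + (eps^2/2) grad log pi(x), eps^2 I)
        m = x + 0.5 * eps**2 * grad_log_pi(x)
        return -np.sum((y - m) ** 2) / (2.0 * eps**2)
    prop = (theta + 0.5 * eps**2 * grad_log_pi(theta)
            + eps * rng.standard_normal(theta.shape))
    log_acc = log_pi(prop) + log_q(theta, prop) - log_pi(theta) - log_q(prop, theta)
    return prop if np.log(rng.uniform()) < log_acc else theta

# Example: 50-dimensional standard Gaussian target.
rng = np.random.default_rng(5)
theta = np.zeros(50)
for _ in range(1000):
    theta = mala_step(theta, lambda x: -0.5 * x @ x, lambda x: -x, eps=0.3, rng=rng)
```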

Slides      Video

 

Time

10:00 – 11:00

Speaker

Prof. Norikazu Ikoma (Kyushu Institute of Technology)

Title

Deformable Target Tracking over Infrared Camera Video in Coarse Resolution with Possible Frame-Out of the Target by Particle Filter

Abstract

Motion estimation of a deformable target over infrared camera video in coarse resolution, with possible frame-out of the target, by an elaborated particle filter has been investigated. State space modeling with a state consisting of a location factor of the target and its shape factor has been employed, where the shape factor is a set of pixels over the image frame with a connectivity property, and the location factor specifies the position of the center of gravity of the set of pixels over the image plane. The location factor is in real-valued vector form, while the shape factor is formed by a set of pixels in discrete coordinates over the digital image. After intensity enhancement of the dim image captured by the infrared camera, a particle filter algorithm is applied, with a system model specifying the smooth motion of the target and its deformation, and a likelihood model evaluating relatively high intensity values in the shaped region of the target. Note that for the system model, we employ a mixture of a second-order difference equation for smooth motion and a uniform distribution over the image plane, to cope with abrupt changes of the location due to the lack of frame rate in the video. For smooth change of the target shape, we utilize a Markov chain Monte Carlo move on the set of connected pixels.

Keywords: visual tracking, deformable object, particle filter, infrared camera, Markov chain Monte Carlo.
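
In symbols, the mixture system model for the location factor \(x_t\) described above takes, up to details, the form

\[
x_t \sim (1 - \pi)\, \mathcal{N}\!\left( 2 x_{t-1} - x_{t-2},\, \Sigma \right) + \pi\, \mathrm{Uniform}(\text{image plane}),
\]

where the Gaussian component encodes smooth constant-velocity motion via the second-order difference equation, and the uniform component, with small weight \(\pi\), accounts for abrupt relocations such as frame-out and re-entry; \(\pi\) and \(\Sigma\) are illustrative symbols rather than values from the talk.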

 

Time

11:00 – 12:00

Speaker

Prof. Yoshinori Kawasaki (ISM)

Title

Scale Mixture of Skewed Kalman Filter and Its Application

Abstract

The Kalman filter is an essential tool for obtaining the log-likelihood of linear Gaussian time series models via the prediction error decomposition. One of its popular uses is to decompose an observed time series into several unobserved component time series. As an extension of the linear state space model, Naveau et al. (2005) proposed a skewed state space model that allows one of the decomposed time series to follow an asymmetric distribution. The assumed class of distributions is called the closed skew normal (CSN), which includes the normal distribution as a special case. The extended algorithm is called the skewed Kalman filter (SKF). Kim et al. (2014) illustrate a scale mixture of SKF to accommodate a skewed and heavy-tailed distributed component, but their scheme seems to lack practical guidelines for actual implementation. In this presentation, we show a fixed interval smoothing algorithm for SKF, which seems to be missing in the literature, and apply it to daily commodity futures time series obtained from the Tokyo Commodity Exchange. Next, we explore a practically feasible algorithm to estimate a scale mixture of SKF, especially when the true mixture structure and mixture rates are unknown. [This is joint work with Mr. Daisuke Kurisu, a master's course student at the Graduate School of Economics, the University of Tokyo.]
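
For reference, the linear Gaussian recursion that SKF extends: with state equation \(x_t = F x_{t-1} + w_t\), \(w_t \sim \mathcal{N}(0, Q)\), and observation \(y_t = H x_t + v_t\), \(v_t \sim \mathcal{N}(0, R)\), the Kalman filter alternates

\[
\begin{aligned}
\hat{x}_{t|t-1} &= F \hat{x}_{t-1|t-1}, \qquad P_{t|t-1} = F P_{t-1|t-1} F^{\top} + Q,\\
K_t &= P_{t|t-1} H^{\top} \left( H P_{t|t-1} H^{\top} + R \right)^{-1},\\
\hat{x}_{t|t} &= \hat{x}_{t|t-1} + K_t \left( y_t - H \hat{x}_{t|t-1} \right), \qquad P_{t|t} = (I - K_t H) P_{t|t-1},
\end{aligned}
\]

and the prediction errors \(y_t - H \hat{x}_{t|t-1}\), with covariances \(H P_{t|t-1} H^{\top} + R\), yield the Gaussian log-likelihood; SKF replaces the Gaussian conditional distributions with closed skew normal ones.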

Slides      Video

 

Afternoon I: Application: Financial modeling and insurance

Time

13:30 – 15:00

Speaker

Dr. Pavel V. Shevchenko (CSIRO)

Title

Modelling Annuity Portfolios and Longevity Risk with Extended CreditRisk+

Abstract

Using an extended version of the credit risk model CreditRisk+, we develop a flexible framework to estimate stochastic life tables and to model annuity portfolios, including actuarial reserves. Deaths are driven by common stochastic latent risk factors, which may be interpreted as death causes such as neoplasms or circulatory diseases, or as idiosyncratic components. Our approach provides an efficient, numerically stable algorithm for an exact calculation of the one-period loss distribution in which various sources of risk are considered. As required by many regulators, we can then derive risk measures for the one-period loss distribution, such as value-at-risk and expected shortfall. In particular, our model allows stress testing and therefore offers insight into how certain health scenarios influence the annuity payments of an insurer. Such scenarios may include improvements in health treatments or better medication. Using publicly available data, we provide estimation procedures for the model parameters, including classical approaches as well as MCMC methods. We conclude with a real-world example using Australian death data.
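
Schematically, the CreditRisk+-type mechanism can be written as a gamma-mixed Poisson model (a stylized version with illustrative notation, not the exact specification of the talk):

\[
N_i \mid \Lambda \sim \mathrm{Poisson}\!\left( m_i \Big( w_{i0} + \sum_{k} w_{ik} \Lambda_k \Big) \right), \qquad \Lambda_k \sim \mathrm{Gamma},
\]

where \(N_i\) is the number of deaths in group \(i\), \(m_i\) a baseline death rate, and the weights \(w_{ik}\) allocate exposure to the common factors (death causes) \(\Lambda_k\); it is this mixed Poisson structure that makes the one-period loss distribution computable exactly by recursion.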

Slides      Video

 

Afternoon II: Application: Financial modeling and insurance

Time

15:30 – 16:30

Speaker

Prof. Guillaume Bagnarosa (ESC Rennes School of Business)

Title

About Risk Neutral Uncertainty

Abstract

The pricing of numerous financial derivatives, among which exotic options and variance swaps, rests on the hypothesis of complete markets, which relies in turn on the existence of a continuum of vanilla option prices and, more importantly, on the quality of this information. During this talk we will highlight several microstructure sources of noise which undoubtedly lead to measurement errors and calibration impediments. While industry and academics generally assume that the mid-price of the asset's bid-ask spread corresponds to the asset's fair price, we demonstrate through dynamic analysis of high-frequency data how misleading such an assumption is, and how it relates to another significant source of noise arising from the non-synchronicity of information flows. Finally, we propose a state space model to circumvent this ubiquitous measurement uncertainty and thus improve the quality of the information used for derivatives pricing and model calibration.

(Slides)*      (Video)*      *after publishing

 

Time

16:30 – 17:30

Speaker

Mr. Matthew Ames (UCL)

Title

Violations of Uncovered Interest Rate Parity and International Exchange Rate Dependences

Abstract

The uncovered interest rate parity puzzle questions the economic relation between short-term interest rate differentials and exchange rates. One would indeed expect the differential of interest rates between two countries to be offset by an opposite evolution of the exchange rate between them, hence ruling out any limited-risk profit opportunities. However, it has been shown empirically that this relation does not hold, which over the past two decades has led to the reinforcement of a well-known trading strategy in financial markets, namely the currency carry trade. This paper investigates how highly leveraged, mass speculator behaviour affects the dependence structure of currency returns. We propose a rigorous statistical modelling approach using two complementary techniques in order to demonstrate that speculative carry trade volumes are informative for both the covariance and the tail dependence of high and low interest rate currency returns, whereas the price-based factors previously suggested in the literature hold little explanatory power. We add a new feature to the understanding of the link between the UIP condition and the carry trade strategy, specifically attributed to the large joint exchange rate movements in high and low risk environments.
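
In equation form, the UIP condition at issue states that the expected exchange-rate move offsets the interest differential,

\[
\mathbb{E}_t\left[ s_{t+1} - s_t \right] = i_t - i_t^{*},
\]

where \(s_t\) is the log exchange rate (the domestic price of foreign currency) and \(i_t\), \(i_t^{*}\) are the domestic and foreign short rates; the carry trade profits precisely from the empirical failure of this equality.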

Slides      Video

 

Time

17:30 – 18:00

Speaker

Mr. Lucas Tian Huijie (UCL)

Title

Loss Reserve

Slides      Video

 


 

July 17 (Friday)

Morning: Research Talk and Tutorial: Complex systems for environmental modeling

Time

9:30 – 10:30

Speaker

Prof. Yoshiki Yamagata, Dr. Daisuke Murakami (National Institute for Environmental Studies)

Title

A spatiotemporal analysis of participatory sensing data “tweets” and extreme climate events toward real-time urban risk management

Abstract

Real-time urban climate monitoring provides useful information that can be utilized to help monitor and adapt to extreme events, including urban heatwaves. Typical approaches to the monitoring of climate data include weather station monitoring and remote sensing via satellite sensors. However, climate monitoring stations are very often sparsely distributed in space, and consequently this has a significant impact on the ability to reveal exposure risks due to extreme climates at an intra-urban scale (e.g., street level). Additionally, such traditional remote sensing data sources are typically not received and analyzed in real time, which is often required for adaptive urban management of climate extremes, such as sudden heatwaves. Fortunately, recent social media, such as Twitter, furnish real-time and high-resolution spatial information that might be useful for climate condition estimation.

The objective of this study is to utilize geo-tagged tweets (participatory sensing data) for urban temperature analysis. We first detect tweets relating to heat (“heat-tweets”). Then, we study relationships between monitored temperatures and heat-tweets via a statistical model framework based on copula modelling methods. We demonstrate that there are strong relationships between “heat-tweets” and temperatures recorded at an intra-urban scale, which we reveal in our analysis of Tokyo city and its suburbs. Subsequently, we investigate the use of “heat-tweets” to inform spatio-temporal Gaussian process interpolation of temperatures. We utilize a combination of spatially sparse weather monitoring sensor data, infrequently available MODIS remote sensing data, and spatially and temporally dense but lower-quality geo-tagged Twitter data. Here, a spatial best linear unbiased estimation (S-BLUE) technique is applied. The results suggest that tweets provide useful auxiliary information for urban climate assessment. Lastly, the effectiveness of tweets for real-time urban risk management is discussed based on the analysis results.

 

Time

10:30 – 12:00

Speaker

Dr. Ido Nevat (Institute for Infocomm Research, A*STAR)

Slides      Video

 

Afternoon I: Theory and Methodology: Statistical machine learning

Time

13:30 – 14:30pm

Speaker

Prof. Taiji Suzuki (Tokyo Institute of Technology)

Title

Stochastic Alternating Direction Method of Multipliers for Structured Sparsity

Abstract

In this talk, we present new stochastic optimization methods that are applicable to a wide range of structured regularizations. Structured regularization is a useful statistical tool to deal with complicated data structures such as group sparsity, graphical sparsity, and low-rank tensor structure. The proposed methods are based on stochastic optimization techniques and the Alternating Direction Method of Multipliers (ADMM). ADMM is a general framework for optimizing a composite function and has a wide range of applications. We propose two types of stochastic variants of ADMM, corresponding to (a) online stochastic optimization and (b) stochastic dual coordinate descent. Both methods require only one or a few sample observations at each iteration and are suitable for large-scale data analysis. It is shown that the online-type method (a) yields the minimax optimal convergence rate in the online setting, and the stochastic dual coordinate descent method (b) yields an exponential convergence rate in the batch setting.
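
For readers unfamiliar with ADMM, the deterministic scheme on which the stochastic variants build solves \(\min_{x,z} f(x) + g(z)\) subject to \(Ax + Bz = c\) by iterating

\[
\begin{aligned}
x^{k+1} &= \arg\min_{x}\; f(x) + \tfrac{\rho}{2} \left\| Ax + Bz^{k} - c + u^{k} \right\|^2,\\
z^{k+1} &= \arg\min_{z}\; g(z) + \tfrac{\rho}{2} \left\| Ax^{k+1} + Bz - c + u^{k} \right\|^2,\\
u^{k+1} &= u^{k} + Ax^{k+1} + Bz^{k+1} - c,
\end{aligned}
\]

with scaled dual variable \(u\) and penalty \(\rho\); structured regularizers enter through \(g\), whose update is often a closed-form proximal step, and the stochastic variants (a) and (b) replace the exact \(x\)-minimization with cheap sampled updates.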

 

Time

14:30 – 15:30pm

Speaker

Prof. Kenji Fukumizu (ISM)

Title

Kernel Mean Particle Filter with Intractable Likelihoods

Abstract

We propose a kernel method for filtering with a state space model under the condition that the observation model has no explicit density form but sampling is possible. The standard sequential Monte Carlo filters are not applicable, since they require the functional form of the density functions (up to a constant). Using the framework of kernel mean embedding, the proposed method expresses a distribution with weighted samples, for which the Gram matrix computation provides a way of Bayesian computation. Resampling with kernel herding is also implemented to improve the filtering accuracy. Experimental results with alpha-stable multivariate stochastic volatility models show favorable performance of the proposed method over the existing ABC filter in the setting of intractable likelihood, and also in the case of tractable likelihood for high-dimensional variables. This work is a collaboration with Motonobu Kanagawa and Yoshimasa Uematsu.
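
The key object here is the kernel mean embedding: a distribution \(P\) is represented in the RKHS of a kernel \(k\) by the mean element

\[
\mu_P = \mathbb{E}_{X \sim P}\left[ k(\cdot, X) \right] \;\approx\; \sum_{i=1}^{n} w_i\, k(\cdot, x_i),
\]

so filtering distributions are carried forward as weighted samples \((x_i, w_i)\) and conditioning reduces to linear algebra with Gram matrices; kernel herding resamples by greedily choosing points that match this mean element.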

Slides      Video

 

Afternoon II: Methodology and Application: Gaussian processes and state space modeling

Time

16:00 – 17:00

Speaker

Prof. Konstantin Markov (Aizu U.)

Title

Probabilistic Dynamic Time Warping based on Gaussian Processes

Abstract

A big problem of DTW-based classification systems is the fact that the class model, the "template" (usually one of the training samples), does not account for the within-class variability. This variability is reflected in the DTW scores and causes bigger overlap between different classes. Here, we propose a new "template" model based on Gaussian Processes which accounts for this variability and thus improves the inter-class separability.
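
For context, the DTW score mentioned above is computed by a simple dynamic program over the two sequences; a minimal sketch of the classical (non-probabilistic) version:

```python
import numpy as np

def dtw(a, b):
    """Classic dynamic time warping distance between 1-D sequences a and b."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])     # local match cost
            # Extend the cheapest of the three predecessor alignments.
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[n, m]

print(dtw([0, 1, 2, 1, 0], [0, 0, 1, 2, 1, 0]))  # small: same shape, time-warped
```

The proposal in the talk replaces the single template sequence with a Gaussian Process model of the class, so that the matching score reflects within-class variability.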

 

Time

17:00 – 17:30

Speaker

Prof. Tomoko Matsui (ISM)

Title

Monte Carlo Dynamic Classifier (MCDC) Tool

Abstract

A toolbox developed in Matlab for the estimation, calibration and filtering of very flexible and general Gaussian-process-specified state space models is introduced. State space models are widely used in many areas of science, engineering and economics to model time series and dynamical systems. We develop a toolbox for a complete Bayesian approach to inference and learning (i.e. state estimation and system identification) in nonlinear, nonparametric state space models.
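
The model class handled by the toolbox is, schematically, a Gaussian process state space model,

\[
x_t = f(x_{t-1}) + w_t, \qquad y_t = g(x_t) + v_t, \qquad f,\, g \sim \mathcal{GP},
\]

with GP priors on the transition and observation functions, so that system identification (learning \(f\) and \(g\)) and state estimation are carried out jointly by Monte Carlo inference.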

(Slides)*      (Video)*      *after publishing

 

Time

17:30 – 17:45

 

Closing