HKUST

CSIC 5011: Topological and Geometric Data Reduction and Visualization
Spring 2020


Course Information

Synopsis (摘要)

This course is open to graduates and senior undergraduates in applied mathematics, statistics, and engineering who are interested in learning from data. Students with other backgrounds, such as the life sciences, are also welcome, provided they have a certain level of mathematical maturity. The course covers a wide range of topics in geometric data reduction (principal component analysis, manifold learning, etc.) and topological data reduction (clustering, computational homology, etc.).
Prerequisites: linear and abstract algebra, basic probability and multivariate statistics, basic stochastic processes (Markov chains), and convex optimization; familiarity with Matlab, R, and/or Python.

Reference (参考教材)

[pdf download]

Topological Data Analysis for Genomics and Evolution: Topology in Biology. By Raul Rabadan and Andrew J. Blumberg [ Amazon ]

Instructors:

Yuan YAO

Time and Place:

Wednesday 3:00-5:50pm, Zoom Webinar ( link ) and LSK Rm1027

Homework and Projects:

Weekly homework (no grader, but I'll read your submissions and give bonus credits), mini-projects, and a final major project. No final exam.
Email: datascience.hw (add "AT gmail DOT com" afterwards)

Schedule (时间表)

Date Topic Instructor Scribe
02/19/2020, Wed Lecture 01: Syllabus, Principal Component Analysis, and Multidimensional Scaling [ syllabus ] [ PCA-MDS in keynote slides ] [ video ]
    [Homework 1]:
  • Homework 1 [pdf]. Just for fun, no grading, but I'll read your submissions and give you bonus credits.
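    [Python sketch]:
  • A minimal comparison of PCA with classical MDS (an illustrative sketch, not part of the course materials; assumes numpy and scikit-learn):

      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(0)
      X = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 20))  # rank-5 cloud in R^20

      # PCA: project onto the top eigenvectors of the sample covariance.
      Y_pca = PCA(n_components=2).fit_transform(X)

      # Classical MDS: double-center the squared-distance matrix, then eigendecompose.
      D2 = np.square(np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1))
      n = D2.shape[0]
      J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
      B = -0.5 * J @ D2 @ J                 # Gram matrix of the centered data
      w, V = np.linalg.eigh(B)
      idx = np.argsort(w)[::-1][:2]
      Y_mds = V[:, idx] * np.sqrt(w[idx])

      # With Euclidean distances, classical MDS agrees with PCA up to column signs.
      print(np.allclose(np.abs(Y_pca), np.abs(Y_mds), atol=1e-6))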
Y.Y.
02/26/2020, Wed Lecture 02: Horn's Parallel Analysis and Random Matrix Theory for PCA (Chap 3: 3) [ slides ] [ video ]
    [Homework 2]:
  • Homework 2 [pdf]. Just for fun, no grading, but I'll read your submissions and give you bonus credits.
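    [Python sketch]:
  • A toy version of Horn's parallel analysis (illustrative only; assumes numpy): retain the leading sample eigenvalues that exceed the 95th percentile of eigenvalues obtained from column-permuted data.

      import numpy as np

      rng = np.random.default_rng(0)
      n, p, n_perm = 200, 20, 100
      X = rng.normal(size=(n, 3)) @ rng.normal(size=(3, p)) + rng.normal(size=(n, p))  # 3 factors + noise

      def top_eigs(M):  # eigenvalues of the sample covariance, in descending order
          return np.sort(np.linalg.eigvalsh(np.cov(M, rowvar=False)))[::-1]

      obs = top_eigs(X)
      null = np.array([top_eigs(np.column_stack([rng.permutation(X[:, j]) for j in range(p)]))
                       for _ in range(n_perm)])
      threshold = np.percentile(null, 95, axis=0)  # permutation null, per component

      exceeds = obs > threshold
      k = p if exceeds.all() else int(np.argmin(exceeds))  # leading run above the null
      print("retained components:", k)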
Y.Y. LI, Zhen
03/04/2020, Wed Lecture 03: Sample Mean as MLE? James-Stein Estimator and Shrinkages (Chap 3: 1-2) [ slides ]
    [Reference]:
  • Comparing Maximum Likelihood Estimator and James-Stein Estimator in R: [ JSE.R ]
    [Homework 3]:
  • Homework 3 [pdf]. Just for fun, no grading, but I'll read your submissions and give you bonus credits.
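    [Python sketch]:
  • A Monte-Carlo comparison of the MLE with the (positive-part) James-Stein estimator, paralleling the JSE.R reference above (assumes numpy; illustrative only):

      import numpy as np

      rng = np.random.default_rng(0)
      p, n_trials = 10, 10000
      theta = rng.normal(size=p)                  # true mean, fixed across trials

      X = theta + rng.normal(size=(n_trials, p))  # one observation X ~ N(theta, I_p) per trial
      shrink = np.maximum(0.0, 1.0 - (p - 2) / np.sum(X ** 2, axis=1, keepdims=True))
      js = shrink * X                             # positive-part James-Stein, shrinking toward 0

      risk_mle = np.mean(np.sum((X - theta) ** 2, axis=1))  # the MLE here is X itself
      risk_js = np.mean(np.sum((js - theta) ** 2, axis=1))
      print(f"MLE risk ~ {risk_mle:.2f}, James-Stein risk ~ {risk_js:.2f}")  # JS wins for p >= 3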
Y.Y.
03/11/2020, Wed Lecture 04: Random Projections, Johnson-Lindenstrauss Lemma, and Applications in Compressed Sensing etc. (Chap 2) [ Lecture04.pdf ]
    [Reference]:
  • Joseph Salmon's lecture on Johnson-Lindenstrauss Theory [ JLlemma.pdf ]
  • Random Projections in Scikit-learn: [ link ]
    [Homework 4]:
  • Homework 4 [pdf]. Just for fun, no grading, but I'll read your submissions and give you bonus credits.
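    [Python sketch]:
  • Johnson-Lindenstrauss in action with scikit-learn's random projection utilities (illustrative only; assumes numpy, scipy and scikit-learn): pairwise distances survive a Gaussian projection to k = O(log n / eps^2) dimensions.

      import numpy as np
      from scipy.spatial.distance import pdist
      from sklearn.random_projection import (GaussianRandomProjection,
                                             johnson_lindenstrauss_min_dim)

      rng = np.random.default_rng(0)
      n, d, eps = 500, 10000, 0.3
      X = rng.normal(size=(n, d))

      k = johnson_lindenstrauss_min_dim(n_samples=n, eps=eps)  # JL dimension bound
      Y = GaussianRandomProjection(n_components=k, random_state=0).fit_transform(X)

      ratios = pdist(Y) / pdist(X)  # distortion of all pairwise distances
      print(f"k = {k}; distance ratios in [{ratios.min():.3f}, {ratios.max():.3f}]")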
Y.Y.
03/18/2020, Wed Lecture 05: SDP Relaxations: Robust PCA, Sparse PCA, and MDS with Uncertainty [ slides ]
    [Homework 5]:
  • Homework 5 [pdf]. Just for fun, no grading, but I'll read your submissions and give you bonus credits.
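    [Python sketch]:
  • A bare-bones convex Robust PCA (principal component pursuit), min ||L||_* + lam ||S||_1 s.t. L + S = D, solved by a simple augmented-Lagrangian loop. This is a sketch rather than a tuned solver (assumes numpy); the defaults lam = 1/sqrt(max(m, n)) and mu = mn/(4 ||D||_1) follow common practice in the RPCA literature.

      import numpy as np

      def shrink(M, tau):  # soft-thresholding: prox of the l1 norm
          return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

      def svt(M, tau):     # singular value thresholding: prox of the nuclear norm
          U, s, Vt = np.linalg.svd(M, full_matrices=False)
          return (U * shrink(s, tau)) @ Vt

      def rpca_pcp(D, lam=None, mu=None, n_iter=200):
          m, n = D.shape
          lam = 1.0 / np.sqrt(max(m, n)) if lam is None else lam
          mu = 0.25 * m * n / np.abs(D).sum() if mu is None else mu
          L, S, Y = np.zeros_like(D), np.zeros_like(D), np.zeros_like(D)
          for _ in range(n_iter):
              L = svt(D - S + Y / mu, 1.0 / mu)     # low-rank update
              S = shrink(D - L + Y / mu, lam / mu)  # sparse update
              Y += mu * (D - L - S)                 # dual ascent on L + S = D
          return L, S

      rng = np.random.default_rng(0)
      L0 = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 50))  # rank 3
      S0 = (rng.random((50, 50)) < 0.05) * rng.normal(scale=10, size=(50, 50))
      L, S = rpca_pcp(L0 + S0)
      print("relative error in L:", np.linalg.norm(L - L0) / np.linalg.norm(L0))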
Y.Y.
03/25/2020, Wed Seminars and Mini-Project 1 [ project1.pdf ]
    [ Title ]: Accelerated Outlier Detection in Low-Rank and Structured Data: Robust PCA and Extensions [ slides ]
  • [ Speaker ]: HanQin CAI, University of California at Los Angeles
  • [ Abstract ]: We study robust PCA in the fully observed setting, which concerns separating a low-rank matrix L and a sparse matrix S from their sum D = L + S. In this talk, a new algorithm, dubbed accelerated alternating projections, is introduced for robust PCA, significantly improving the computational efficiency of existing non-convex algorithms. An exact recovery guarantee is established, showing linear convergence of the proposed algorithm. Empirical performance evaluations confirm the advantage of our algorithm over other state-of-the-art algorithms for robust PCA. Furthermore, we extend our method to low-rank Hankel matrices, with applications to spectrally sparse signals.
  • [ Bio ]: HanQin Cai is an Assistant Adjunct Professor in the Department of Mathematics at UCLA. He earned his Ph.D. from the University of Iowa in 2018, under the guidance of Professors Jian-Feng Cai and Weiyu Xu. His research interests lie in image processing, data analysis, optimization, and machine learning. Currently, he is focusing on projects in outlier detection and zeroth-order optimization.
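    [Python sketch]:
  • A toy, non-accelerated alternating-projections heuristic for D = L + S in the spirit of the talk (assumes numpy; illustrative only — the actual AccAltProj algorithm uses carefully chosen adaptive thresholds and tangent-space projections):

      import numpy as np

      def altproj_rpca(D, r, beta=0.5, n_iter=30):
          S = np.zeros_like(D)
          for t in range(n_iter):
              U, s, Vt = np.linalg.svd(D - S, full_matrices=False)
              L = (U[:, :r] * s[:r]) @ Vt[:r]                # best rank-r approximation
              zeta = beta ** t * s[0] / np.sqrt(D.shape[0])  # decaying hard threshold
              R = D - L
              S = np.where(np.abs(R) > zeta, R, 0.0)         # keep only large residuals
          return L, S

      rng = np.random.default_rng(1)
      L0 = rng.normal(size=(60, 2)) @ rng.normal(size=(2, 60))
      S0 = (rng.random((60, 60)) < 0.05) * rng.normal(scale=8, size=(60, 60))
      L, S = altproj_rpca(L0 + S0, r=2)
      print("relative error in L:", np.linalg.norm(L - L0) / np.linalg.norm(L0))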
    [ Title ]: Online robust matrix factorization for dependent data streams [ slides ]
  • [ Speaker ]: Hanbaek Lyu, University of California at Los Angeles
  • [ Abstract ]: Online Robust Matrix Factorization (ORMF) algorithms seek to learn a reduced number of latent features, as well as outliers, from streaming data sets. It is important to understand the stability of online algorithms for dependent data streams, since such streams are often generated by Markov chain Monte Carlo (MCMC) algorithms, yet rigorous convergence analyses of most online algorithms have been limited to independently sampled data. In this talk, we propose an algorithm for ORMF and prove its almost sure convergence to the set of critical points of the expected loss function, even when the data matrices are functions of an underlying Markov chain satisfying a mild mixing condition. We illustrate our results through dictionary learning and outlier detection problems for images and networks.
  • [ Bio ]: Hanbaek Lyu is a Hedrick Assistant Professor in the Department of Mathematics at UCLA. He earned his Ph.D. from the Ohio State University in 2018, under the guidance of Professor David Sivakoff. His research interests lie in probability, combinatorics, complex systems, and machine learning. Recently, he has been focusing on online optimization algorithms and dictionary learning problems on networks.
Y.Y.
04/01/2020, Wed Lecture 06: Manifold Learning I: ISOMAP and LLE (with Modified LLE, LTSA) [ slides ] (see the Python sketch below)
    [Reference]:
  • [ISOMAP]: Tenenbaum's website on science paper with datasets;
  • [LLE]: Roweis' website on science paper;
  • Zhang, Z. & Wang, J. MLLE: Modified Locally Linear Embedding Using Multiple Weights. [ NIPS 2006 ]
  • Zhang, Z. & Zha, H. (2005) Principal manifolds and nonlinear dimensionality reduction via tangent space alignment. SIAM Journal on Scientific Computing. 26 (1): 313-338. [doi:10.1137/s1064827502419154]
    [Matlab]:
  • IsomapR1: ISOMAP codes by Tenenbaum and de Silva (isomapII.m with sparsity; fast mex with dijkstra.cpp and fibheap.h)
  • lle.m : lle with k-nearest neighbors
  • kcenter.m : k-center algorithm to find 'landmarks' in a metric space
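    [Python sketch]:
  • A scikit-learn counterpart to the Matlab codes above, on the classic Swiss roll (illustrative only; assumes scikit-learn):

      from sklearn.datasets import make_swiss_roll
      from sklearn.manifold import Isomap, LocallyLinearEmbedding

      X, color = make_swiss_roll(n_samples=1500, random_state=0)

      Y_iso = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
      Y_lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2,
                                     method='standard', random_state=0).fit_transform(X)
      # method='modified' gives MLLE and method='ltsa' gives LTSA, as in the references above.
      print(Y_iso.shape, Y_lle.shape)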
Y.Y.
04/08/2020, Wed Lecture 07: Manifold Learning II: Hessian LLE, Laplacian Eigenmap, Diffusion Map, and Stochastic Neighbor Embedding [ slides ] (see the Python sketch below)
    [Reference]:
  • Mikhail Belkin & Partha Niyogi, Laplacian Eigenmaps and Spectral Techniques for Embedding and Clustering, Advances in Neural Information Processing Systems (NIPS) 14, 2001, pp. 585-591, MIT Press [nips link]
  • Donoho, D. & Grimes, C. Hessian eigenmaps: Locally linear embedding techniques for high-dimensional data. Proc Natl Acad Sci U S A. 100:5591 (2003). [doi: 10.1073/pnas.1031596100]
  • R. R. Coifman, S. Lafon, A. B. Lee, M. Maggioni, B. Nadler, F. Warner, and S. W. Zucker. Geometric diffusions as a tool for harmonic analysis and structure definition of data: Diffusion maps. PNAS 102 (21):7426-7431, 2005 [doi: 10.1073/pnas.0500334102]
  • Nadler, Boaz; Stéphane Lafon; Ronald R. Coifman; Ioannis G. Kevrekidis. "Diffusion Maps, Spectral Clustering and Eigenfunctions of Fokker–Planck Operators." Advances in Neural Information Processing Systems (NIPS) 18, 2005.
  • Coifman, R.R.; S. Lafon (2006). "Diffusion maps". Applied and Computational Harmonic Analysis. 21: 5–30. doi:10.1016/j.acha.2006.04.006.
  • Stochastic Neighbor Embedding [ .pdf ]
  • Visualizing Data using t-SNE [ .pdf ]
  • A paper that relates SNE to Laplacian Eigenmaps [ .pdf ]
  • A helpful website: How to use t-SNE effectively? [ link ]
    [Matlab]
  • Matlab code to compare manifold learning algorithms [ mani.m ] : PCA, MDS, ISOMAP, LLE, Hessian LLE, LTSA, Laplacian, Diffusion (no SNE!)
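    [Python sketch]:
  • A compact diffusion map from scratch (illustrative only; assumes numpy and scipy; the bandwidth eps is a knob to tune):

      import numpy as np
      from scipy.spatial.distance import pdist, squareform

      def diffusion_map(X, eps, n_components=2, t=1):
          K = np.exp(-squareform(pdist(X, 'sqeuclidean')) / eps)  # Gaussian affinities
          d = K.sum(axis=1)
          A = K / np.sqrt(np.outer(d, d))  # symmetric conjugate of P = D^{-1} K
          w, V = np.linalg.eigh(A)
          idx = np.argsort(w)[::-1]
          w, V = w[idx], V[:, idx]
          Psi = V / np.sqrt(d)[:, None]    # right eigenvectors of the Markov matrix P
          return Psi[:, 1:n_components + 1] * w[1:n_components + 1] ** t  # drop trivial psi_0

      rng = np.random.default_rng(0)
      theta = rng.uniform(0, 2 * np.pi, 300)
      X = np.column_stack([np.cos(theta), np.sin(theta)]) + 0.05 * rng.normal(size=(300, 2))
      Y = diffusion_map(X, eps=0.5)  # a noisy circle embeds as a circle again
      print(Y.shape)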
Y.Y.
04/15/2020, Wed Lecture 08: Random Walk on Graphs and Spectral Graph Theory: Perron-Frobenius (PageRank), Fiedler (Algebraic Connectivity), Cheeger Inequality (Spectral bi-partition), Lumpability (Spectral Clustering) and Transition Path Theory (Semi-supervised Learning) [ slides ]
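    [Python sketch]:
  • Two themes of this lecture in a few lines (illustrative only; assumes numpy): PageRank by power iteration and spectral bi-partition by the Fiedler vector, on two triangles joined by a single edge.

      import numpy as np

      A = np.array([[0, 1, 1, 0, 0, 0],   # vertices 0-2: first triangle
                    [1, 0, 1, 0, 0, 0],
                    [1, 1, 0, 1, 0, 0],   # bridge edge 2-3
                    [0, 0, 1, 0, 1, 1],   # vertices 3-5: second triangle
                    [0, 0, 0, 1, 0, 1],
                    [0, 0, 0, 1, 1, 0]], dtype=float)
      n = A.shape[0]
      d = A.sum(axis=1)

      # PageRank: stationary vector of the damped random walk (Perron-Frobenius).
      P = A / d[:, None]                  # row-stochastic transition matrix
      alpha, pi = 0.85, np.ones(n) / n
      for _ in range(100):
          pi = alpha * pi @ P + (1 - alpha) / n
      print("PageRank:", np.round(pi, 3))

      # Fiedler vector: eigenvector of the second-smallest eigenvalue of L = D - A;
      # its sign pattern gives the spectral bi-partition (cf. the Cheeger inequality).
      Lap = np.diag(d) - A
      w, V = np.linalg.eigh(Lap)
      print("Fiedler cut:", np.sign(V[:, 1]))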
Y.Y.
04/22/2020, Wed Lecture 09: Introduction to Topological Data Analysis. [ slides ] [ video ] (see the Python sketch after the references below)
    [Reference]:
  • Topological Methods for Exploring Low-density States in Biomolecular Folding Pathways.
    Yuan Yao, Jian Sun, Xuhui Huang, Gregory Bowman, Gurjeet Singh, Michael Lesnick, Vijay Pande, Leonidas Guibas and Gunnar Carlsson.
    J. Chem. Phys. 130, 144115 (2009).
    [pdf][Online Publication][SimTK Link: Data and Mapper Matlab Codes] [Selected by Virtual Journal of Biological Physics Research, 04/15/2009].

  • Structural insight into RNA hairpin folding intermediates.
    Bowman, Gregory R., Xuhui Huang, Yuan Yao, Jian Sun, Gunnar Carlsson, Leonidas Guibas and Vijay Pande.
    Journal of the American Chemical Society, 2008, 130 (30): 9676-9678.
    [link]

  • Single-cell topological RNA-seq analysis reveals insights into cellular differentiation and development.
    Abbas H Rizvi, Pablo G Camara, Elena K Kandror, Thomas J Roberts, Ira Schieren, Tom Maniatis & Raul Rabadan.
    Nature Biotechnology. 2017 May. doi:10.1038/nbt.3854

  • Spatiotemporal genomic architecture informs precision oncology in glioblastoma.
    Lee JK, Wang J, Sa JK, Ladewig E, Lee HO, Lee IH, Kang HJ, Rosenbloom DS, Camara PG, Liu Z, van Nieuwenhuizen P, Jung SW, Choi SW, Kim J, Chen A, Kim KT, Shin S, Seo YJ, Oh JM, Shin YJ, Park CK, Kong DS, Seol HJ, Blumberg A, Lee JI, Iavarone A, Park WY, Rabadan R, Nam DH.
    Nat Genet. 2017 Apr. doi: 10.1038/ng.3806.

  • A Python implementation of Mapper [ sakmapper ] for single-cell data analysis.
  • Single Cell TDA [ scTDA ] with [ tutorial in html ]
  • A Java package for persistent homology and barcodes: Javaplex Tutorial.

  • Persistent Homology Analysis of Biomolecular Data
    Guo-Wei Wei.
    SIAM News 2017

  • Topological Data Analysis Generates High-Resolution, Genome-wide Maps of Human Recombination.
    Pablo G. Camara, Daniel I.S. Rosenbloom, Kevin J. Emmett, Arnold J. Levine, Raul Rabadan.
    Cell Systems. 2016 June. doi: 10.1016/j.cels.2016.05.008.

  • Topology of viral evolution.
    Chan JM, Carlsson G, Rabadan R.
    Proc Natl Acad Sci USA 2013 Oct 29. doi: 10.1073/pnas.1313480110.

  • Robert Ghrist's monograph on applied topology: Elementary Applied Topology
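
    [Python sketch]:
  • For intuition, a from-scratch 0-dimensional persistence computation (connected components of the Vietoris-Rips filtration) via union-find; for real data, use Javaplex above or a package such as Ripser. Illustrative only; assumes numpy and scipy.

      import numpy as np
      from scipy.spatial.distance import pdist, squareform

      def h0_barcode(X):
          D = squareform(pdist(X))
          n = D.shape[0]
          parent = list(range(n))
          def find(i):                    # union-find with path compression
              while parent[i] != i:
                  parent[i] = parent[parent[i]]
                  i = parent[i]
              return i
          bars = []                       # every component is born at scale 0
          edges = sorted((D[i, j], i, j) for i in range(n) for j in range(i + 1, n))
          for eps, i, j in edges:         # sweep edges by increasing length
              ri, rj = find(i), find(j)
              if ri != rj:
                  parent[ri] = rj
                  bars.append((0.0, eps)) # a component dies when two merge
          bars.append((0.0, np.inf))      # the last component never dies
          return bars

      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(3, 0.1, (20, 2))])
      print("long bars:", [b for b in h0_barcode(X) if b[1] > 1.0])  # two clusters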

Y.Y.
04/29/2020, Wed Lecture 10: Hodge Theory and Applications: Social Choice, Crowdsourced Ranking, and Game Theory [ slides ] [ video ] (see the Python sketch after the references below)
    [ Reference ]:
  • Statistical Ranking and Combinatorial Hodge Theory.
    Xiaoye Jiang, Lek-Heng Lim, Yuan Yao and Yinyu Ye.
    Mathematical Programming, Volume 127, Number 1, Pages 203-244, 2011.
    [pdf][ arxiv.org/abs/0811.1067][ Matlab Codes]

  • Flows and Decompositions of Games: Harmonic and Potential Games
    Ozan Candogan, Ishai Menache, Asuman Ozdaglar, and Pablo A. Parrilo
    Mathematics of Operations Research, 36(3): 474 - 503, 2011
    [arXiv.org/abs/1005.2405][ doi:10.1287/moor.1110.0500 ]

  • HodgeRank on Random Graphs for Subjective Video Quality Assessment.
    Qianqian Xu, Qingming Huang, Tingting Jiang, Bowei Yan, Weisi Lin, and Yuan Yao.
    IEEE Transactions on Multimedia, 14(3):844-857, 2012
    [pdf][ Matlab codes in zip ]

  • Robust Evaluation for Quality of Experience in Crowdsourcing.
    Qianqian Xu, Jiechao Xiong, Qingming Huang, and Yuan Yao
    ACM Multimedia 2013.
    [pdf]

  • Online HodgeRank on Random Graphs for Crowdsourceable QoE Evaluation.
    Qianqian Xu, Jiechao Xiong, Qingming Huang, and Yuan Yao
    IEEE Transactions on Multimedia, 16(2):373-386, Feb. 2014.
    [pdf]

  • Analysis of Crowdsourced Sampling Strategies for HodgeRank with Sparse Random Graphs
    Braxton Osting, Jiechao Xiong, Qianqian Xu, and Yuan Yao
    Applied and Computational Harmonic Analysis, 41 (2): 540-560, 2016
    [ arXiv:1503.00164 ] [ ACHA online ] [Matlab codes to reproduce our results]

  • False Discovery Rate Control and Statistical Quality Assessment of Annotators in Crowdsourced Ranking
    Qianqian Xu, Jiechao Xiong, Xiaochun Cao, Yuan Yao
    Proceedings of The 33rd International Conference on Machine Learning (ICML), New York, June 19-24, 2016.
    [ arXiv:1605.05860 ] [ pdf ] [ supplementary ]

  • Parsimonious Mixed-Effects HodgeRank for Crowdsourced Preference Aggregation
    Qianqian Xu, Jiechao Xiong, Xiaochun Cao, Yuan Yao
    ACM Multimedia Conference (ACMMM), Amsterdam, Netherlands, October 15-19, 2016.
    [ arXiv:1607.03401 ] [ pdf ]

  • HodgeRank with Information Maximization for Crowdsourced Pairwise Ranking Aggregation
    Qianqian Xu, Jiechao Xiong, Xi Chen, Qingming Huang, Yuan Yao
    The Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), New Orleans, Louisiana, USA, February 2–7, 2018.
    [ arXiv:1711.05957 ] [ Matlab Source Codes ]

  • From Social to Individuals: a Parsimonious Path of Multi-level Models for Crowdsourced Preference Aggregation
    Qianqian Xu, Jiechao Xiong, Xiaochun Cao, Qingming Huang, Yuan Yao
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(4):844-856, 2019. Extended from MM'16 in [ arXiv:1607.03401 ].
    [ arXiv:1804.11177 ] [ doi: 10.1109/TPAMI.2018.2817205 ][ GitHub source]

  • Professor Don Saari: [ UCI homepage ] [ Book Info: Disposing Dictators, Demystifying Voting Paradoxes ] [ Amazon link ]
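
    [Python sketch]:
  • The least-squares (gradient) component of HodgeRank in a few lines, using the edge-vertex incidence matrix as a discrete gradient; the edges and flows below are made-up toy data (assumes numpy; illustrative only):

      import numpy as np

      # y[k] is the observed preference flow on edge (i, j): positive means j beats i.
      edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
      y = np.array([1.0, 2.0, 1.2, 2.1, 0.9])

      n = 4
      B = np.zeros((len(edges), n))  # gradient operator: (grad s)_(i,j) = s_j - s_i
      for k, (i, j) in enumerate(edges):
          B[k, i], B[k, j] = -1.0, 1.0

      s, *_ = np.linalg.lstsq(B, y, rcond=None)
      s -= s.mean()                  # scores are determined only up to a constant
      print("global ranking scores:", np.round(s, 3))

      residual = y - B @ s           # the cyclic (curl + harmonic) part: inconsistency
      print("inconsistency (residual norm):", round(float(np.linalg.norm(residual)), 3))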
Y.Y.
05/06/2020, Wed Lecture 11: Seminars.
    [Title]: Robust Statistics and Generative Adversarial Networks [ slides ] [ video ]
    [Reference]:
  • Chao Gao, Jiyi Liu, Yuan Yao, & Weizhi Zhu, Robust Estimate and Generative Adversarial Networks, ICLR 2019. [arXiv:1810.02030]
  • Chao Gao, Yuan Yao, & Weizhi Zhu, Generative Adversarial Nets for Robust Scatter Estimation: A Proper Scoring Rule Perspective. [ arXiv:1903.01944 ]

    [ Title ]: Spectral methods for latent variable models [ slides ]
    [ Speaker ]: Dr. WANG, Kaizheng, Princeton University and Columbia University
    [ Abstract ] Latent variable models lay the statistical foundation for data science problems with unstructured, incomplete and heterogeneous information. Spectral methods extract low-dimensional geometric structures for downstream tasks in a computationally efficient way. Despite their conceptual simplicity and wide applicability, their theoretical understanding lags far behind, which hinders the development of principled approaches. In this talk, I will first discuss the bias and variance of PCA, and apply the results to distributed estimation of principal eigenspaces. Then I will present an $\ell_p$ theory of eigenvector analysis that yields optimal recovery guarantees for spectral methods in many challenging problems. The results find applications in dimensionality reduction, mixture models, network analysis, recommendation systems, ranking and beyond.
    [ Bio ] Kaizheng Wang received his Ph.D. in Operations Research and Financial Engineering from Princeton University and will join Columbia University as an Assistant Professor in the fall of 2020. His research interests lie at the intersection of statistics, machine learning and optimization, with a special focus on the development and analysis of efficient algorithms for unsupervised learning.
Y.Y.
05/13/2020, Wed Lecture 12: Final Project. [ project2.pdf ] [ video ]
    [ Title ]: Clustering via Uncoupled REgression (CURE) [ slides ]
    [ Speaker ]: Dr. WANG, Kaizheng, Princeton University and Columbia University
    [ Abstract ] In this talk, we first consider a canonical clustering problem where one receives unlabeled samples drawn from a balanced mixture of two elliptical distributions and aims to estimate the labels with a classifier. Many popular methods, including PCA and k-means, require the individual components of the mixture to be somewhat spherical, and perform poorly when they are stretched. To overcome this issue, we propose a non-convex program seeking an affine transform that turns the data into a one-dimensional point cloud concentrating around -1 and 1, after which clustering becomes easy. Our theoretical contributions are two-fold: (1) we show that the non-convex loss function exhibits desirable landscape properties as long as the sample size exceeds some constant multiple of the dimension, and (2) we leverage this to prove that an efficient first-order algorithm achieves near-optimal statistical precision even without good initialization. We also propose a general methodology for multi-class clustering tasks with flexible choices of feature transforms and loss objectives.
  • Paper: [ arXiv:2003.09960 ]
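    [Python sketch]:
  • A toy gradient-descent take on the idea described in the abstract: fit an affine map h(x) = <w, x> + b whose outputs concentrate near -1 and +1, then cluster by sign. This is only an illustration on made-up data, not the authors' CURE program; see the paper for the actual method and guarantees (assumes numpy).

      import numpy as np

      rng = np.random.default_rng(0)
      n = 1000
      labels = 2 * rng.integers(0, 2, n) - 1                 # balanced +/- 1 labels
      X = labels[:, None] * np.array([2.0, 0.0]) \
          + rng.normal(size=(n, 2)) * np.array([0.3, 1.5])   # stretched elliptical clusters

      w, b = 0.1 * rng.normal(size=2), 0.0
      for _ in range(3000):                                  # plain gradient descent
          z = X @ w + b
          g = 4 * z * (z ** 2 - 1)                           # derivative of (z^2 - 1)^2
          w -= 0.02 * (g @ X) / n
          b -= 0.02 * g.mean()

      pred = np.sign(X @ w + b)
      acc = max(np.mean(pred == labels), np.mean(pred == -labels))
      print(f"clustering accuracy ~ {acc:.3f}")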
Y.Y.

Datasets (to be updated)


by YAO, Yuan.