Economics
Permanent URI for this collection: https://uwspace.uwaterloo.ca/handle/10012/9874
This is the collection for the University of Waterloo's Department of Economics.
Research outputs are organized by type (e.g., Master Thesis, Article, Conference Paper).
Waterloo faculty, students, and staff can contact us or visit the UWSpace guide to learn more about depositing their research.
Browsing Economics by Author "Chen, Tao"
Now showing 1 - 4 of 4
Item: Applications of Machine Learning on Econometrics for Two-stage Regression, Bias-adjusted Inference with Unobserved Confounding, and Test for High Dimensionality (University of Waterloo, 2024-08-19) Xu, Wenzuo; Chen, Tao
Nonparametric approaches have been studied extensively and applied when no assumption is made about the model specification. More generally, a sieve can be constructed as a collection of subsets of finite-dimensional approximating parameter spaces, over which the target function is estimated by optimizing the fit without requiring a parametric specification. Although the concept of sieves is devised in this general way, classic sieve estimation in the literature has mostly focused on single-layer approximations. When the target functions have intricate patterns, however, these single-layer estimators show limited capability even when data-generated sieve bases are allowed, whereas characterizing different attributes of the target functions progressively through multiple layers is often more sensible. Deep neural networks (DNNs) offer a multi-layer extension of traditional sieves by modelling the connections among variables through data transformations from one layer to the next. DNNs have greater freedom than single-layer sieves in increasing the sieve complexity to ensure consistent estimation while maintaining a relatively simple structure in each layer for feasible estimation. This thesis contains three chapters developing methodologies and motivating applications of DNNs in econometrics for two-stage regression, bias-adjusted inference with unobserved confounding, and testing for high dimensionality.

Item: Essays on Empirical Likelihood for Heaviness Estimation, Outlier Detection and Clustering (University of Waterloo, 2024-04-24) Zhang, Zhuojing; Chen, Tao
Empirical likelihood (EL) is a nonparametric likelihood method of inference, and there is a large body of work on its extensions and applications.
Most studies discuss the EL ratio for constructing confidence regions and testing hypotheses; this thesis instead focuses on the EL weight that the EL ratio function assigns to each observation in the dataset. The thesis contains three chapters studying the behaviour and application of EL weights. Specifically, Chapter 1 provides a novel approach based on EL weights to estimate a threshold separating the bulk and the tail of datasets with a heavy-tailed histogram. Because the transition between the bulk and the tail often cannot be fully disjoint, we allow the threshold to be a random variable instead of a fixed number. In addition, since heaviness is a relative concept, the threshold is defined relative to a benchmark. In Chapter 2, we focus on outlier detection, developing an unsupervised EL-based method to identify outliers. In particular, we calculate the EL weights through the EL ratio function with a bootstrap mean constraint and show that the EL weights behave differently for datasets with and without outliers. Additionally, the EL weights provide a measure of outlierness for all observations, which may reduce computation time. In Chapter 3, I consider a clustering algorithm based on EL weights. Clustering is an unsupervised method that aims to group unlabeled data based on similarity. Numerous clustering methods have been proposed, and their performance typically depends on the characteristics of the dataset in a specific application. The proposed EL-weight-based clustering algorithm can handle datasets with outliers.
Moreover, it may suggest the number of clusters when the clusters are well separated.

Item: Essays on Portfolio Selection, Continuous-time Analysis, and Market Incompleteness (University of Waterloo, 2023-04-05) Li, Yixuan; Chen, Tao
This thesis consists of three self-contained essays evaluating topics in portfolio selection, continuous-time analysis, and market incompleteness. The two opposing investment strategies, diversification and concentration, have often been compared directly. While there is little debate about Markowitz's approach as the benchmark for diversification, the precise meaning of concentration in portfolio selection remains unclear. Chapter 1, coauthored with Jiawen Xu, Kai Liu, and Tao Chen, offers a novel definition of concentration, along with an extreme value theory-based estimator for its implementation. When overlaying the performances derived from diversification (in Markowitz's sense) and concentration (in our definition), we find an implied risk threshold at which the two polar investment strategies reconcile -- diversification has a higher expected return when risk is below the threshold, while concentration becomes the preferred strategy when risk exceeds it. Different from the conventional concave shape, the estimated frontier is piecewise concave, resembling the shape of a seagull. Further, taking the equity premium puzzle as an example, we demonstrate how the family of frontiers nested between the estimated curves can provide new perspectives for research involving market portfolios. Parametric continuous-time analysis of stochastic processes often entails generalizing a predefined discrete formulation to a continuous-time limit. However, unknown convergence rates of the frequency-dependent parameters can destabilize the continuous-time generalization and cause modelling discrepancy, which in turn leads to unreliable estimation and forecasting.
To circumvent this discrepancy, Chapter 2, coauthored with Tao Chen and Renfang Tian, proposes a simple solution based on functional data analysis and truncated Taylor series expansions. A simulation demonstrates that the proposed method is superior in both fitting and forecasting continuous-time stochastic processes compared with parametric methods, which struggle to uncover the true underlying processes. When markets are incomplete, perfect risk sharing is impossible and the law of one price no longer guarantees the uniqueness of the stochastic discount factor (SDF), resulting in a set of admissible SDFs, which complicates the study of financial market equilibrium, portfolio optimization, and derivative securities. Chapter 3, coauthored with Tao Chen, proposes a discrete-time econometric framework for estimating this set of SDFs, where the market is incomplete in that there are extra states relative to the existing assets. We show that the estimated incomplete-market SDF set has a unique boundary point and shrinks to this point only when the market becomes complete. This property allows us to develop a novel measure of market incompleteness based on the Wasserstein metric, which estimates the least distance between the probability distributions of the complete- and incomplete-market SDFs. To facilitate the parametrization of market incompleteness for implementation, we then consider in detail a continuous-time framework in which the incompleteness arises specifically from stochastic jumps in asset prices, and we demonstrate that the theoretical results developed under the discrete-time setting still hold. Furthermore, we study the evolution of market incompleteness in four of the world's major stock markets, namely those in China, Japan, the United Kingdom, and the United States.
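The intuition behind a Wasserstein-based incompleteness measure can be illustrated with a minimal sketch. The simulated SDF samples and numerical values below are purely hypothetical, not the thesis's estimator: a complete market pins down a unique SDF (a point mass), an incomplete market admits a dispersed set, and the 1-Wasserstein distance between the two empirical distributions vanishes only when they coincide.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Complete market: the SDF is unique, i.e. a point mass at one value
# (0.95 is purely illustrative).
sdf_complete = np.full(1000, 0.95)

# Incomplete market: a set of admissible SDFs, represented here by a
# dispersed sample around the same value.
sdf_incomplete = 0.95 + 0.1 * rng.standard_normal(1000)

# The 1-Wasserstein distance is zero iff the two distributions coincide,
# mirroring the property that the measure vanishes only when the market
# completes.
d = wasserstein_distance(sdf_complete, sdf_incomplete)
print(f"illustrative incompleteness measure: {d:.3f}")
```

As the dispersion of the admissible set shrinks toward the boundary point, the computed distance shrinks toward zero.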
Our findings indicate that an increase in market incompleteness is usually caused by financial crises or policy changes that raise the likelihood of unanticipated risks.

Item: On Functional Data Analysis: Methodologies and Applications (University of Waterloo, 2020-05-04) Tian, Renfang; Chen, Tao
In economic analyses, the variables of interest are often functions defined on continua such as time or space, even though we may only have access to discrete observations -- such variables are said to be "functional" (Ramsay, 1982). Traditional economic analyses model discrete observations using discrete methods, which can cause misspecification when the data are driven by functional underlying processes, leading in turn to inconsistent estimation and invalid inference. This thesis contains three chapters on functional data analysis (FDA), which concerns data that are functional in nature. As a nonparametric method accommodating functional data of different levels of smoothness, FDA not only recovers the functional underlying processes from discrete observations without misspecification, but also allows for analyses of derivatives of the functional data. Specifically, Chapter 1 provides an application of FDA in examining the distribution equality of GDP functions across different versions of the Penn World Table (PWT). Through our bootstrap-based hypothesis test, and by applying the properties of the derivatives of functional data, we find no support for the distribution-equality hypothesis, indicating that GDP functions in different versions do not share a common underlying distribution. This result suggests a need for caution in drawing conclusions from a particular PWT version and for appropriate sensitivity analyses to check the robustness of results. In Chapter 2, we utilize an FDA approach to generalize dynamic factor models.
The newly proposed generalized functional dynamic factor model adopts two-dimensional loading functions to accommodate, nonparametrically, possible instability of the loadings and lag effects of the factors. Large-sample theory and simulation results are provided. We also present an application of the model using a widely used macroeconomic dataset. In Chapter 3, I consider a functional linear regression model with forward-in-time-only causality from functional predictors onto a functional response. In this chapter, (i) a uniform convergence rate of the estimated functional coefficients is derived, depending on the degree of cross-sectional dependence; (ii) asymptotic normality of the estimated coefficients is obtained under suitable conditions, with unknown forms of cross-sectional dependence; (iii) a bootstrap method is proposed for approximating the distribution of the estimated functional coefficients. A simulation analysis illustrates the estimation and bootstrap procedures and demonstrates the properties of the estimators.
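The core FDA idea running through these abstracts -- recovering a smooth function, and its derivatives, from discrete noisy observations -- can be sketched in a few lines. This is a toy illustration with a synthetic signal and a simple polynomial basis, not any of the estimators developed in the theses:

```python
import numpy as np

# Discrete, noisy observations of an underlying smooth process on [0, 1].
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 50)                    # discrete observation grid
y = np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal(t.size)

# Least-squares projection onto a small polynomial basis yields a
# functional object that can be evaluated anywhere on the continuum.
f = np.polynomial.Polynomial.fit(t, y, deg=7)

# Derivatives of the recovered function come essentially for free,
# enabling the derivative-based analyses the abstracts describe.
df = f.deriv()

# f(0.25) should be close to sin(pi/2) = 1 despite the noise.
print(f"f(0.25) = {float(f(0.25)):.3f}")
```

The same template extends to richer bases (splines, Fourier) and to smoothing penalties; the point is only that the fitted object is a function, not a vector of fitted values.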