Author: Brophy, Jonathan
Advisor: Lowd, Daniel
Date: 2023-03-24
URI: https://hdl.handle.net/1794/28085
Title: Understanding and Adapting Tree Ensembles: A Training Data Perspective
Type: Electronic Thesis or Dissertation
Language: en-US
Rights: All Rights Reserved.
Keywords: gradient-boosted trees; influence estimation; machine unlearning; random forests; tree ensembles; uncertainty estimation

Abstract: Despite the impressive success of deep-learning models on unstructured data (e.g., images, audio, text), tree-based ensembles such as random forests and gradient-boosted trees are hugely popular, remain the preferred choice for tabular or structured data, and are regularly used to win challenges on data-competition websites such as Kaggle and DrivenData. However, despite their strong predictive performance, tree-based ensembles lack certain characteristics that may limit their further adoption, especially in safety-critical or privacy-sensitive domains such as weather forecasting or predictive medical modeling. This dissertation investigates three shortcomings currently facing tree-based ensembles: a lack of explainable predictions, limited uncertainty estimation, and inefficient adaptability to changes in the training data. It posits that numerous improvements to tree-based ensembles can be made by analyzing the relationships between the training data and the resulting learned model. By studying the effects of one or many training examples on tree-based ensembles, we develop solutions for these models that (1) increase their predictive explainability, (2) provide accurate uncertainty estimates for individual predictions, and (3) efficiently adapt learned models to accurately reflect updated training data. This dissertation includes previously published co-authored material.