Validity, reliability, and significance : empirical methods for NLP and data science (Record no. 34116)

000 -LEADER
fixed length control field a
008 - FIXED-LENGTH DATA ELEMENTS--GENERAL INFORMATION
fixed length control field 250611b xxu||||| |||| 00| 0 eng d
020 ## - INTERNATIONAL STANDARD BOOK NUMBER
International Standard Book Number 9783031570643
082 ## - DEWEY DECIMAL CLASSIFICATION NUMBER
Classification number 006.312
Item number RIE
100 ## - MAIN ENTRY--PERSONAL NAME
Personal name Riezler, Stefan
245 ## - TITLE STATEMENT
Title Validity, reliability, and significance : empirical methods for NLP and data science
250 ## - EDITION STATEMENT
Edition statement 2nd ed.
260 ## - PUBLICATION, DISTRIBUTION, ETC. (IMPRINT)
Place of publication, distribution, etc Cham :
Name of publisher, distributor, etc Springer,
Date of publication, distribution, etc 2024.
300 ## - PHYSICAL DESCRIPTION
Extent xvii, 168 p. :
Other physical details ill. (some col.) ;
Dimensions 25 cm.
365 ## - TRADE PRICE
Price amount 39.99
Price type code
Unit of pricing 100.40
504 ## - BIBLIOGRAPHY, ETC. NOTE
Bibliography, etc Includes bibliographical references.
520 ## - SUMMARY, ETC.
Summary, etc Empirical methods are means of answering methodological questions of the empirical sciences by statistical techniques. The methodological questions addressed in this book are the problems of validity, reliability, and significance. In the case of machine learning, these correspond to the questions of whether a model predicts what it purports to predict, whether a model's performance is consistent across replications, and whether a performance difference between two models is due to chance, respectively. The goal of this book is to answer these questions with concrete statistical tests that can be applied to assess the validity, reliability, and significance of data annotation and machine learning prediction in the fields of NLP and data science.
Our focus is on model-based empirical methods, where data annotations and model predictions are treated as training data for interpretable probabilistic models from the well-understood families of generalized additive models (GAMs) and linear mixed effects models (LMEMs). Based on the interpretable parameters of the trained GAMs or LMEMs, the book presents model-based statistical tests, such as a validity test that allows the detection of circular features that circumvent learning. Furthermore, the book discusses a reliability coefficient using variance decomposition based on the random effect parameters of LMEMs. Finally, a significance test based on the likelihood ratio of nested LMEMs trained on the performance scores of two machine learning models is shown to naturally allow the inclusion of variations in meta-parameter settings into hypothesis testing, and further facilitates a refined system comparison conditional on properties of the input data.
This book can be used as an introduction to empirical methods for machine learning in general, with a special focus on applications in NLP and data science. The book is self-contained, with an appendix on the mathematical background of GAMs and LMEMs, and an accompanying webpage that includes R code to replicate the experiments presented in the book.
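The likelihood-ratio test of nested models mentioned in the summary can be illustrated with a minimal sketch. This is not the book's method (which uses LMEMs fitted in R); it is a simplified Gaussian linear-model analogue with synthetic data, where the variable names (`system`, `score`) and all numbers are hypothetical, chosen only to show the shape of the test.

```python
# Illustrative likelihood-ratio test for nested models (simplified:
# ordinary linear models rather than the book's LMEMs).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200
system = rng.integers(0, 2, n)                        # hypothetical indicator: system A vs system B
score = 0.70 + 0.05 * system + rng.normal(0, 0.1, n)  # synthetic performance scores

def ols_loglik(y, X):
    """Maximized log-likelihood of a Gaussian linear model y ~ X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / len(y)                   # MLE of the error variance
    return -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1)

X0 = np.ones((n, 1))                                  # null model: intercept only
X1 = np.column_stack([np.ones(n), system])            # alternative: adds a system effect
lr = 2 * (ols_loglik(score, X1) - ols_loglik(score, X0))
p = stats.chi2.sf(lr, df=1)                           # one extra parameter in the comparison
print(f"LR = {lr:.2f}, p = {p:.4f}")
```

In the book's setting, the two nested models would instead be LMEMs whose random effects absorb variation across replications and meta-parameter settings, so the chi-squared comparison tests the system difference after accounting for those sources of variance.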
650 ## - SUBJECT ADDED ENTRY--TOPICAL TERM
Topical term or geographic name as entry element Machine learning
Topical term or geographic name as entry element Artificial intelligence
Topical term or geographic name as entry element Natural language processing--Research
700 ## - ADDED ENTRY--PERSONAL NAME
Personal name Hagmann, Michael
942 ## - ADDED ENTRY ELEMENTS (KOHA)
Source of classification or shelving scheme
Item type Books
Holdings
Permanent location DAU
Current location DAU
Date acquired 2025-05-26
Source of acquisition KB
Cost, normal purchase price 4015.00
Full call number 006.312 RIE
Barcode 035619
Date last seen 2025-06-11
Koha item type Books
