Jun8 15 hours ago

Not one mention of the EM algorithm, which, as far as I can understand, is what's being described here (https://en.m.wikipedia.org/wiki/Expectation%E2%80%93maximiza...). It has so many applications, among which is estimating the number of clusters for a Gaussian mixture model.

An ELI5 intro: https://abidlabs.github.io/EM-Algorithm/

  • miki123211 13 hours ago

    > It has so many applications, among which is estimating number of clusters for a Gaussian mixture model

    Any sources for that? As far as I remember, EM is used to calculate the actual cluster parameters (means, covariances, etc.), but I'm not aware of it being used to estimate what number of clusters works best.

    Source: I've implemented EM for GMMs for a college assignment once, but I'm a bit hazy on the details.
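
    For what it's worth, a minimal sketch (my own toy example, using scikit-learn's GaussianMixture, which fits by EM): the number of components is an input, and EM estimates the parameters for that fixed K.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(0)
      # toy data: two well-separated Gaussian blobs
      X = np.vstack([rng.normal(0, 1, (100, 2)),
                     rng.normal(5, 1, (100, 2))])

      # K is an input to EM, not something EM estimates
      gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
      print(gmm.weights_)      # mixing proportions
      print(gmm.means_)        # cluster means
      print(gmm.covariances_)  # cluster covariances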

    • fleischhauf 5 hours ago

      you are right, you still need the number of clusters

      • BrokrnAlgorithm 2 hours ago

        I've been out of the loop on stats for a while, but is there a viable approach for estimating ex ante the number of clusters when creating a GMM? I can think of constructing ex post metrics, i.e. using a grid and goodness-of-fit measurements, but these feel more like brute-forcing it.

        • lukego 31 minutes ago

          Is the question fundamentally: what's the relative likelihood of each number of clusters?

          If so then estimating the marginal likelihood of each one and comparing them seems pretty reasonable?

          (I mean in the sense of Jaynes chapter 20.)
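
          A rough sketch of that comparison, with the assumption (mine) of using BIC from scikit-learn as a cheap stand-in for the marginal likelihood:

            import numpy as np
            from sklearn.mixture import GaussianMixture

            rng = np.random.default_rng(0)
            X = np.vstack([rng.normal(0, 1, (100, 2)),
                           rng.normal(5, 1, (100, 2)),
                           rng.normal((0, 5), 1, (100, 2))])

            # fit one GMM per candidate K; lower BIC is better
            # (BIC approximates -2 * log marginal likelihood)
            bics = {k: GaussianMixture(n_components=k, random_state=0)
                         .fit(X).bic(X)
                    for k in range(1, 7)}
            print(min(bics, key=bics.get))  # best-supported K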

        • disgruntledphd2 an hour ago

          Unsupervised learning is hard, and the pick K problem is probably the hardest part.

          For PCA or factor analysis there are lots of ways, but without some way of determining ground truth it's difficult to know if you've done a good job.

  • Sniffnoy 13 hours ago

    It does not appear to be what's being described here? Could you perhaps expand on the equivalence between the two if it is?

clircle 17 hours ago

Does any living statistician come close to the level of Donald Rubin in terms of research impact? Missing data analysis, causal inference, the EM algorithm, and probably more. He just walks around creating new subfields.

  • selectionbias 6 hours ago

    Also approximate Bayesian computation, principal stratification, and the Bayesian Bootstrap.

  • aquafox 10 hours ago

    Andrew Gelman?

    • nabla9 6 hours ago

      Gelman has contributed to Bayesianism and hierarchical models, and Stan is great, but that's not even close to what Rubin has done.

      ps. Gelman was Rubin's doctoral student.

xiaodai 19 hours ago

I don’t know. I find Quanta articles very high-noise. They’re always hyping something.

  • jll29 18 hours ago

    I don't find the language of the article full of "hype"; they describe the history of different forms of imputation from single to multiple to ML-based.

    The table is particularly useful: it captures what the article is all about in a way that can stick in students' minds. I'm very grateful to Quanta Magazine for its popular science reporting.

    • billfruit 9 hours ago

      The Quanta articles usually have a gossipy style and are very low information density.

      • SAI_Peregrinus an hour ago

        They're usually more science history than science. Who did what, when, and a basic overview of why it's important.

  • vouaobrasil 18 hours ago

    I agree with that. I skip the Quanta magazine articles, mainly because the titles seem a little too hyped for my taste and don't represent the content as well as they should.

    • amelius 17 hours ago

      Yes, typically a short conversation with an LLM gives me more info and understanding of a topic than reading a Quanta article.

  • MiddleMan5 10 hours ago

    Curious, what sites would you recommend?

TaurenHunter 12 hours ago

Donald Rubin is kind of a modern-day Leibniz...

Rubin Causal Model

Propensity Score Matching

Contributions to:

Bayesian inference

Missing data mechanisms

Survey sampling

Causal inference in observational studies

Multiple comparisons and hypothesis testing

bgnn 8 hours ago

Why not interpolate the missing data points with similar patients' data?

This must be about the confidence of the approach. Maybe interpolation would be overconfident too.
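
A rough sketch of that idea, assuming nearest-neighbour imputation via scikit-learn's KNNImputer (my choice of tool, not the article's): each gap is filled from the most similar rows, but you get a single point estimate with no uncertainty attached, which is exactly the overconfidence worry.

  import numpy as np
  from sklearn.impute import KNNImputer

  # rows are patients, columns are measurements; NaN marks a gap
  X = np.array([[1.0, 2.0, np.nan],
                [1.1, 2.1, 3.0],
                [0.9, 1.9, 2.9],
                [5.0, 6.0, 7.0]])

  # fill each missing value with the mean over the 2 most similar rows
  imputed = KNNImputer(n_neighbors=2).fit_transform(X)
  print(imputed)  # one deterministic answer, no measure of spread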

SillyUsername 6 hours ago

Isn't this just Monte Carlo, or did I miss something?

userbinator 7 hours ago

It reminds me somewhat of dithering in signal processing.

paulpauper 16 hours ago

why not use regression on the existing entries to infer what the missing ones should be?

  • ivan_ah 15 hours ago

    That would push things towards the mean... not necessarily a bad thing, but presumably later steps of the analysis will be pooling/averaging data together, so it's not that useful.

    A more interesting approach, let's call it OPTION2, would be to sample from the predictive distribution of a regression (regression mean + noise), which would result in more variability in the imputations, although it's random, so it might not be what you want.

    The multiple imputation approach seems to be a resampling method of obtaining OPTION2, without the need to assume a linear regression model.
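
    A rough sketch of OPTION2 for one incomplete column (assumptions mine: ordinary least squares, Gaussian noise scaled to the residuals):

      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(0)
      x = rng.normal(size=200)
      y = 2 * x + rng.normal(scale=0.5, size=200)
      y[:20] = np.nan                     # knock out some values

      obs = ~np.isnan(y)
      reg = LinearRegression().fit(x[obs, None], y[obs])
      sd = np.std(y[obs] - reg.predict(x[obs, None]))

      # plain regression imputation: shrinks toward the fitted line
      y_mean = reg.predict(x[~obs, None])
      # OPTION2: add noise, i.e. draw from the predictive distribution
      y_draw = y_mean + rng.normal(scale=sd, size=y_mean.size)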

    • stdbrouw 9 hours ago

      Multiple imputation simply means you impute multiple times and run the analysis on each complete (imputed) dataset so you can incorporate the uncertainty that comes from guessing at missing values into your final confidence intervals and such. How you actually do the imputation will depend on the type of variable, the amount of missingness etc. A draw from the predictive distribution of a linear model of other variables without missing data is definitely a common method, but in a state-of-the-art multiple imputation package like mi in R you can choose from dozens.
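
      A bare-bones sketch of the whole loop (a toy version of mine, not the mi package: stochastic regression imputation repeated m times, then Rubin's rules to pool):

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        x = rng.normal(size=200)
        y = 3 + 2 * x + rng.normal(size=200)
        y[rng.random(200) < 0.3] = np.nan   # ~30% missing

        obs = ~np.isnan(y)
        reg = LinearRegression().fit(x[obs, None], y[obs])
        sd = np.std(y[obs] - reg.predict(x[obs, None]))

        m = 20
        ests, vars_ = [], []
        for _ in range(m):
            yi = y.copy()
            # impute: predictive mean plus a fresh noise draw each time
            yi[~obs] = (reg.predict(x[~obs, None])
                        + rng.normal(scale=sd, size=(~obs).sum()))
            # analysis of interest: mean of y and its sampling variance
            ests.append(yi.mean())
            vars_.append(yi.var(ddof=1) / len(yi))

        # Rubin's rules: total variance = within-imputation variance
        # plus (1 + 1/m) times between-imputation variance
        q_bar = np.mean(ests)
        t_var = np.mean(vars_) + (1 + 1/m) * np.var(ests, ddof=1)
        print(q_bar, np.sqrt(t_var))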

light_hue_1 18 hours ago

I wish they actually engaged with this issue instead of writing a fluff piece. There are plenty of problems with multiple imputation.

Not the least of which is that it's far too easy to do the equivalent of p-hacking and get your data to be significant by playing games with how you do the imputation. Garbage in, garbage out.

I think all of these methods should be abolished from the curriculum entirely. When I review papers in ML/AI, I automatically reject any paper or dataset that uses imputation.

This is all a consequence of the terrible statistics used in most fields. Bayesian methods don't need to do this.

  • jll29 18 hours ago

    There are plenty of legitimate articles that discuss/survey imputation in ML/AI: https://scholar.google.com/scholar?hl=de&as_sdt=0%2C5&q=%22m...

    • light_hue_1 17 hours ago

      The prestigious journal "Artificial intelligence in medicine"? No. Just because it's on Google scholar doesn't mean it's worth anything. These are almost all trash. On the first page there's one maybe legit paper in an ok venue as far as ML is concerned (KDD; an adjacent field to ML) that's 30 years old.

      No. AI/ML folks don't do imputation on our datasets. I cannot think of a single major dataset in vision, NLP, or robotics that does so, despite missing data being a huge issue in those fields. It's an antiquated method for an antiquated idea of how statistics should work, and it is doing far more damage than good.

      • disgruntledphd2 an hour ago

        Ok, that's interesting. I profoundly disagree with your tone, but I would really like to hear what you regard as good approaches to the problem of missing data (particularly where you have dropout from a study or experiment).

  • parpfish 15 hours ago

    I feel like multiple imputation is fine when you have data missing at random.

    The problem is that data is never actually missing at random, and there's always some sort of interesting variable that confounds which pieces are missing.

    • underbiding 11 hours ago

      True, true, but how do you account for missing data based on variables you care about and those you don't?

      More specifically, how do you determine if the pattern you seem to be identifying is actually related to the phenomenon being measured and not an error in the measurement tools themselves?

      For example, suppose a significant pattern of answers to "Yes / No: have you ever been assaulted?" are left blank. This could be (A) respondents who were assaulted are more likely to leave it blank out of shame, or (B) someone handling the spreadsheet accidentally dropped some rows in the data (because let's be serious here, it's all spreadsheets and emails...).

      While you could say that (B) should be theoretically "more truly random", we can't assume that there isn't a pattern to the way those rows were dropped (i.e. a pattern imposed by some algorithm that bugged out and dropped those rows).

      • Xcelerate 4 hours ago

        > how do you determine if the pattern you seem to be identifying is actually related to the phenomenon being measured and not an error in the measurement tools themselves?

        If the “which data is missing” information can be used to compress the data that isn’t missing further than it can be compressed alone, then the missing data is missing at least in part due to the phenomenon being measured. Otherwise, it’s not.

        We’re basically just asking if K(non-missing data | which data is missing) < K(non-missing data). This is uncomputable, so it doesn’t actually answer your question regarding “how to determine”, but it does provide a necessary and sufficient theoretical criterion.

        A decent practical approximation might be to see if you can develop a model that predicts the non-missing data better when augmented with the “which information is missing” information than via self-prediction. That could be an interesting research project...
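
        A crude sketch of that approximation (my construction: check whether the missingness mask of one variable helps predict an observed variable, via cross-validated scores):

          import numpy as np
          from sklearn.ensemble import RandomForestRegressor
          from sklearn.model_selection import cross_val_score

          rng = np.random.default_rng(0)
          n = 1000
          income = rng.normal(50, 10, n)
          # answers go missing *because* of income (not at random)
          p_miss = 1 / (1 + np.exp(-(income - 55) / 5))
          mask = (rng.random(n) < p_miss).astype(float)[:, None]

          other = rng.normal(size=(n, 3))   # unrelated covariates
          model = RandomForestRegressor(n_estimators=100, random_state=0)
          r2_base = cross_val_score(model, other, income, cv=5).mean()
          r2_aug = cross_val_score(model, np.hstack([other, mask]),
                                   income, cv=5).mean()
          # if the mask improves prediction of the observed data,
          # the missingness is informative about the phenomenon
          print(r2_base, r2_aug)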

        • parpfish an hour ago

          There’s already a bunch of stats research on this problem. Some useful terms to look up are MCAR (missing completely at random) and MNAR (missing not at random)

  • aabaker99 10 hours ago

    My intuition would be that there are certain conditions under which Bayesian inference for the missing data and multiple imputation lead to the same results.

    What is the distinction?

    The scenario described in the article could be represented in a Bayesian method or not. “For a given missing value in one copy, randomly assign a guess from your distribution.” Here “your distribution” could be Bayesian or not, but either way it’s still up to the statistician to make good choices about the model. The Bayesian can p-hack here all the same.

  • DAGdug 18 hours ago

    Maybe in academia, where sketchy incentives rule. In industry, p-hacking is great till you’re eventually caught for doing nonsense that isn’t driving real impact (still, the lead time is enough to mint money).

    • light_hue_1 17 hours ago

      Very doubtful. There are plenty of drugs that get approved and are of questionable value. Plenty of procedures that turn out to be not useful. The incentives in industry are even worse because everything depends on lying with data if you can do it.

      • hggigg 17 hours ago

        Indeed. Even worse, some entire academic fields are built on pillars of lies. I was married to a researcher in one of them. Anything that compromises the existence of the field just gets written off. The end game is that this fed into life-changing healthcare decisions, so one should never assume academia is harmless. It was utterly painful to watch from the perspective of a mathematician.

      • nerdponx 17 hours ago

        I assume by "in industry" they meant in jobs where you are doing data analysis to support decisions that your employer is making. This would be any typical "data scientist" job nowadays. There the consequences of BSing are felt by the entity that pays you, and will eventually come back around to you.

        The incentives in medicine are more similar to those in academia, where your job is to cook up data that convinces someone else of your results, with highly imbalanced incentives that reward fraud.

  • fn-mote 15 hours ago

    Clearly you know your stuff. Are there any not-super-technical references where an argument against using imputation is clearly explained?

a-dub 8 hours ago

life is nothing but shaped noise