Researchers have taken photographs of children’s retinas and screened them using a deep learning AI algorithm to diagnose autism with 100% accuracy. The findings support using AI as an objective screening tool for early diagnosis, especially when access to a specialist child psychiatrist is limited.
A full 100% sounds suspicious. It means complete agreement with the ASD assessment, which itself isn’t bulletproof. Suspicious in the way that usually points to a mistake in the data, e.g. all the ASD pictures taken on the same day and carrying a date stamp, “ASD” written into the metadata or filename, or different lighting in different labs.
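To make that concrete, here’s a minimal sketch with made-up numbers (synthetic data, nothing from the actual study): if every “ASD” image is systematically a bit brighter, say because it came from a different lab, a plain classifier hits essentially 100% on a properly held-out test split without learning anything about retinas at all.

```python
# Toy sketch only: a spurious brightness offset between groups is enough
# for near-perfect held-out accuracy, even though the "pixels" are pure noise.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

labels = rng.integers(0, 2, size=n)             # 0 = TD, 1 = "ASD" (synthetic)
images = rng.normal(0.0, 1.0, size=(n, 64))     # fake pixel data: no real signal
images[labels == 1] += 1.0                      # spurious lab-lighting offset

X_tr, X_te, y_tr, y_te = train_test_split(images, labels,
                                           test_size=0.15, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))   # very close to 1.0 on most seeds
```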
I didn’t see any immediate problems in the published paper, but if these were my results I’d be too worried to publish them.
It sounds like the model is overfitting. They say it scored 100% on the test set, which almost always means the model has learned to ace that particular dataset and will flop in the real world.
I don’t think we should put much weight on this news article. It’s just more overblown hype for the sake of clicks.
The paper mentions how the images were processed (chopping 10% off some to remove name, age, etc.). But all of them came from the same centre and only pixel data was used. Given the other work referenced on retinal thinning in ASD, maybe it is a relatively simple task for this kind of model. But they do say that using multi-centre images will be an important part of the validation. It’s quite possible the performance would drop away once differences in camera, lighting, etc. are factored in.
The article says they kept 15% of the data for testing, so it’s not overfitting. I’m still skeptical though.
I’m pretty sure it’s possible to “overfit” even with a large held-out test set: if the training and test images share the same confound, e.g. the same clinic, camera, or lighting per group, the model can exploit that shortcut in both.
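For example (synthetic data, hypothetical “site” variable, nothing from the paper): a random 15% split keeps the same site mix in train and test, so a site-specific camera cue still works on the test set, while holding out a whole site, which is roughly what the multi-centre validation the authors mention would do, makes the shortcut stop transferring.

```python
# Toy sketch: random split vs leave-one-site-out when labels are confounded with site.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GroupKFold, cross_val_score, train_test_split

rng = np.random.default_rng(1)
n_per_site = 500
site = np.repeat([0, 1], n_per_site)                 # which clinic took the photo

# Recruitment bias: site 0 is mostly ASD, site 1 is mostly TD.
p_asd = np.where(site == 0, 0.9, 0.1)
labels = (rng.random(site.size) < p_asd).astype(int)

X = rng.normal(size=(site.size, 64))                 # fake pixels, no real signal
X[site == 0] += 1.0                                  # site 0's camera is brighter

clf = LogisticRegression(max_iter=1000)

# Random split: train and test share the same site mix, so the brightness
# shortcut still "works" and accuracy looks impressive (around 90% here).
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.15, random_state=0)
print("random split:", clf.fit(X_tr, y_tr).score(X_te, y_te))

# Grouped split (one site held out entirely): the shortcut no longer transfers
# and accuracy collapses, which is what multi-centre validation would reveal.
print("leave-one-site-out:",
      cross_val_score(clf, X, labels, groups=site, cv=GroupKFold(n_splits=2)))
```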