AI Model Drastically Reduces Eye Exam Errors

Researchers have designed an artificial intelligence algorithm that appears to diagnose vision problems more accurately than the classic vision tests currently used by doctors. According to Science, the new test reportedly reduces diagnostic error for eye examinations by around 74%.

Ophthalmologists have used the same vision test for decades: the classic eye exam based on charts of different-sized letters and symbols. The results are left to the ophthalmologist to interpret, and errors can creep in both when reading the results and when making a diagnosis. Researchers from Stanford University aimed to improve upon these tests with an AI algorithm.

According to Stanford computer scientist Chris Piech, part of the issue with the traditional tests is that when the letters become too blurry to see, the test subject begins to guess at them. This guessing means that the results can vary if a person takes the test multiple times. To develop a test with better accuracy and replicability, Piech and colleagues created an online test whose results are used to train an AI model. The online test first walks the user through calibrating their screen. Once the screen is calibrated, the user enters their distance from the screen, and the program displays the letter "E" in various orientations. The program asks 20 questions for each eye, updating a vision score based on a statistical model as it goes, and then renders a prediction based on that score.
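
The researchers' scoring code isn't published, but the loop the article describes, ask a question, observe the answer, refine the vision score, can be sketched as a simple Bayesian procedure. Everything below (the 0-to-1 acuity grid, the logistic psychometric curve, the slope and guessing parameters) is an illustrative assumption, not the authors' actual model:

```python
import math

# Candidate acuity scores on a 0-1 grid (loosely logMAR-style:
# 0.0 ~ perfect vision, 1.0 ~ very poor). Purely illustrative.
GRID = [i / 100 for i in range(101)]

def p_correct(acuity, letter_size, slope=10.0, guess=0.25):
    """Chance of answering one tumbling-"E" question correctly:
    a logistic psychometric curve plus a 1-in-4 guessing floor
    (four possible orientations). Parameters are assumptions."""
    p_see = 1.0 / (1.0 + math.exp(-slope * (letter_size - acuity)))
    return guess + (1.0 - guess) * p_see

def update(posterior, letter_size, correct):
    """One Bayesian update of the vision-score distribution
    after a single question."""
    weighted = []
    for p, a in zip(posterior, GRID):
        lik = p_correct(a, letter_size)
        weighted.append(p * (lik if correct else 1.0 - lik))
    total = sum(weighted)
    return [w / total for w in weighted]

def estimate(answers):
    """answers: list of (letter_size, was_correct) pairs, e.g. the
    20 questions asked per eye. Returns the posterior-mean score."""
    posterior = [1.0 / len(GRID)] * len(GRID)
    for size, correct in answers:
        posterior = update(posterior, size, correct)
    return sum(p * a for p, a in zip(posterior, GRID))
```

For a subject who reliably reads letters at size 0.5 and above but misses smaller ones, the posterior mean settles roughly near 0.5, and the guessing floor keeps a lucky answer on a tiny letter from dragging the estimate too far.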

The research team ran their model through 1,000 computer simulations that emulated the inputs of genuine patients. Each simulation is primed with a known visual acuity score and then makes the types of errors a person might make while taking the test. The researchers evaluated the model this way because every simulated test has a "true" acuity score to compare against, which isn't the case when a human takes the test. According to the researchers, their model reduced diagnostic error by around three quarters (74%) compared to the classic vision tests. Despite these fairly impressive results, Piech and colleagues caution that the model isn't intended to replace doctors; rather, it's a tool that doctors could potentially use to enhance the accuracy of a diagnosis.
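
The evaluation setup described above can likewise be sketched. The psychometric error model and the crude placeholder estimator below are assumptions for illustration; the actual study would plug the team's statistical model in as the estimator:

```python
import math
import random

def p_correct(true_acuity, letter_size, slope=10.0, guess=0.25):
    """Illustrative psychometric model: the probability that a
    simulated patient answers a question of this size correctly."""
    p_see = 1.0 / (1.0 + math.exp(-slope * (letter_size - true_acuity)))
    return guess + (1.0 - guess) * p_see

def simulate_patient(true_acuity, sizes, rng):
    """Generate the noisy answers a real patient might give,
    including lucky guesses on letters too small to see."""
    return [(s, rng.random() < p_correct(true_acuity, s)) for s in sizes]

def run_trials(n_trials=1000, n_questions=20, seed=0):
    """Emulate the evaluation: each trial starts from a known
    'true' acuity, so estimator error can be measured exactly --
    something impossible with real human test-takers."""
    rng = random.Random(seed)
    errors = []
    for _ in range(n_trials):
        truth = rng.uniform(0.0, 1.0)
        sizes = [rng.uniform(0.0, 1.0) for _ in range(n_questions)]
        answers = simulate_patient(truth, sizes, rng)
        # Crude placeholder estimator (fraction of wrong answers);
        # a real evaluation would substitute the statistical model.
        est = 1.0 - sum(1 for _, ok in answers if ok) / n_questions
        errors.append(abs(est - truth))
    return sum(errors) / n_trials  # mean absolute error
```

Comparing the mean absolute error of two estimators run on the same simulated patients is one way a "74% reduction in diagnostic error" figure could be computed.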

Ophthalmologist Mark Blecher opined to Science that while the program is a helpful and clever application of AI to ophthalmology, the researchers should also take into account factors like the environment in which the subject takes the test, as these can influence results as well. Beyond this, Blecher anticipates the researchers may have difficulty getting ophthalmologists to adopt the new model and agree on a new standard, as the status quo can be hard to overturn.

The research done by Piech and colleagues isn’t the only recent development concerning both AI and vision. Recently, Google developed an AI model that could occasionally outperform clinicians at identifying common eye conditions that can lead to loss of vision. Google DeepMind collaborated with Moorfields Eye Hospital to develop a model that could meaningfully predict the chance that a patient might develop a severe form of macular degeneration. Elsewhere, an Israeli startup by the name of AEYE Health utilized computer vision techniques and machine learning to develop retinal scanners that can potentially do basic, accurate recognition of common eye conditions, referring the patient to a doctor if the diagnosis is positive.