Google’s Pixel 4 gets a new life as its face unlock cameras are used for Alzheimer’s research

We’re almost three full generations past Google’s Pixel 4 series – which was a flop – but the hardware isn’t gone for good. While we’d argue it deserves more software updates, the Pixel 4 and its face unlock cameras are being used to further medical research into diseases such as Alzheimer’s.

How can the Pixel 4 detect Alzheimer’s?

Researchers at the DigiHealth Lab at UC San Diego are studying diseases such as Alzheimer’s and how they can be tested for in ways that are easy and ubiquitous. Led by Professor Edward Wang, the latest way they’ve found to do that is with Google’s Pixel 4 (per The Verge).

Alzheimer’s can be a terrifying neurological disease, one that affects almost 6 million people over the age of 65 in the US, according to the Mayo Clinic. Alzheimer’s causes those afflicted by it to develop serious memory problems and, eventually, to lose the ability to carry out their everyday lives.

But early detection of Alzheimer’s can make an impact on treatment by slowing the progression of symptoms.

That’s why being able to use a smartphone, such as the Pixel 4, to detect Alzheimer’s could be a big deal.

In a video interview with The Verge, the researchers behind this project explain that our eyes – specifically, the pupils – can serve as a signal for detecting Alzheimer’s. Recording the pupil’s response to specific cognitive tasks allows researchers and scientists (or, in this case, an app) to identify cognitive impairment based on how the pupil behaves during those tasks.

On the Pixel 4, the app does this using the IR cameras Google built in for face unlock. IR cameras are used because they’re much better at distinguishing the pupil from the surrounding iris – regardless of eye color – than a standard camera working in the visible-light spectrum would be.
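The core idea is that under near-infrared light the pupil shows up as the darkest region of the frame, so even a simple intensity threshold can separate it from the brighter iris. The sketch below illustrates that principle on a synthetic grayscale IR frame; it is a hypothetical simplification for illustration only, not the researchers’ actual pipeline, which would use far more robust segmentation and tracking.

```python
import numpy as np

def estimate_pupil_diameter(ir_frame, threshold=40):
    """Estimate pupil diameter (in pixels) from a grayscale IR eye frame.

    Under IR illumination the pupil absorbs light and appears darkest,
    so dark pixels below `threshold` are treated as pupil. This is a
    toy illustration, not a production pupillometry method.
    """
    mask = ir_frame < threshold       # dark pixels = pupil candidates
    area = mask.sum()                 # pupil area in pixels
    # Model the pupil as a circle: area = pi * r^2  ->  d = 2 * sqrt(area / pi)
    return 2.0 * np.sqrt(area / np.pi)

# Synthetic IR frame: bright iris/sclera (value 120) with a dark pupil (value 10)
h, w, r = 200, 200, 30
yy, xx = np.mgrid[:h, :w]
frame = np.full((h, w), 120, dtype=np.uint8)
frame[(yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= r ** 2] = 10

print(estimate_pupil_diameter(frame))  # close to the true diameter of 60 px
```

Tracking how this estimate changes frame by frame during a memory task is, in essence, the kind of pupil-response data the app records and sends back to the lab.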

The app guides users through an exam of their own pupils: they hold an eye up to the camera while running a memory test. The app records the pupil and sends that data back to the lab – it doesn’t deliver a result on the spot. That’s not exactly convenient, but the key thing to remember is what this setup replaces.

In The Verge’s video, a researcher explained that this sort of measurement would usually be done in a clinical lab under controlled settings and with a special piece of hardware that can cost around $10,000. A Pixel 4 won’t replace that device, but this study is being done to figure out if it’s feasible to use data collected by a smartphone at home that doctors could use to see if further testing is needed.

Could any other phones do this?

The tough part of this study, currently, is that phones with IR cameras like the Pixel 4’s are anything but ubiquitous. Google only shipped IR cameras on the Pixel 4, then dropped them on the following Pixel 5 and newer Pixel 6 as those phones skipped face unlock. Samsung also used IR cameras in the past but ditched them as it shrank the bezels around its screens.

Currently, Apple’s iPhone is one of the only phones that ships IR cameras at scale, and it does so for the same reason as the Pixel 4: face unlock. However, the locked-down nature of iOS means a third-party app, such as the one these researchers use, can’t access that hardware.

Our smartphones are full of possibilities like this

While this particular use case has yet to be fully proven out, it’s just another example of how much our smartphones can do in the field of health. Google, in particular, has done a great deal with just the most basic parts of our smartphones when it comes to gathering useful health information.

Last year, Google Fit rolled out the ability for Pixel smartphones to track a user’s heart rate using nothing more than the camera sensors – no wearable required.

Google also previously announced a tool that would allow users to simply take a picture of any skin condition to help identify what it might be, with the tool being able to recognize nearly 300 conditions.



Author: Ben Schoon
Source: 9TO5Google
