Hyperspectral camera

Shwetak Patel (left) and Mayank Goel (right) are researchers at the UbiComp Lab at the University of Washington Department of Computer Science and Engineering. They work with Neel Joshi, a Microsoft researcher, and together created a hyperspectral camera aimed at assisting with health applications.

Can your cellphone camera see the blood flow under your skin? Can it see if that banana you just bought at Whole Foods is rotting on the inside? No, but the HyperCam, developed by researchers at the Ubiquitous Computing (UbiComp) Lab at the UW in conjunction with Microsoft, can.

According to the accompanying paper, presented at the UbiComp 2015 conference in Osaka, Japan, in September, devices like the HyperCam could be used for tasks such as biometric scanning, medical sensing, and analyzing fruits and vegetables. Aside from its small size, the HyperCam offers a number of other advantages that might eventually find their way to a smartphone near you.

A hyperspectral camera is a device that goes beyond the limitations of human vision, which only sees the world as an amalgamation of red, green, and blue (RGB) light. The UW HyperCam’s range is much broader than regular human vision, and it runs special software that makes sense of the images it takes.

“It senses near-infrared, and multiple wavelengths within the visible spectrum,” said Mayank Goel, a fifth-year Ph.D. student in the computer science department, and lead author of the paper. “The human eye can only sense red, green, and blue. But there are lots of colors between that. It looks at those colors as well.”

Goel was introduced to the hyperspectral camera at Microsoft Research as a summer intern. The researchers there were already exploring the possibility of a low-cost solution.

“We had him [Goel] work on software for the project,” said Neel Joshi, a researcher at the Graphics Group at Microsoft Research. “I wrote some of the initial software using my own code and an API from Marcel Gavriliu, who designed and built the hardware. Mayank built his image processing algorithms on top of this and continued the work after the internship ended.”

Goel read more about the technology, made some modifications, and started taking pictures of things he saw around him, including his hands and foods such as fruit.

“I realized that I could see whether an apple or any fruit was ripening or not,” Goel said. “I had a blood pressure cuff, and as I tried to constrict my blood flow, I could see my hand become lighter in color on the camera.”

Upon further research, Goel found that such cameras were already used by grocery stores and in other industrial applications. But they were not in the hands of consumers or research scientists, and Goel realized why: high cost and high complexity.

“There is no easy way for us to discover how to use a hyperspectral camera,” Goel said. “[We decided to] make a cheaper camera, and also make it smart, so whatever you put in front of it, it tries its best to show what might be interesting.”

In effect, the camera sees the same object a human sees, but in a very different way. It does this using 17 different LEDs, each providing illumination for a short period while a picture is taken. Humans can evaluate these pictures individually, but the information they show isn’t easy to interpret, so special software was designed for the HyperCam to speed up the analysis.
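
In rough outline, that capture step amounts to cycling through the LEDs and grabbing one frame per light source. The sketch below illustrates the idea in Python; the `camera` and `led_board` objects are hypothetical stand-ins, since the article does not describe the HyperCam’s actual hardware interface.

```python
import numpy as np

NUM_LEDS = 17  # the HyperCam illuminates the scene with 17 different LEDs


def capture_hyperspectral_stack(camera, led_board, exposure_s=0.05):
    """Capture one frame per LED, lighting each LED only while its frame is taken.

    `camera` and `led_board` are hypothetical driver objects; the real
    HyperCam interface is not described in the article.
    """
    frames = []
    for led in range(NUM_LEDS):
        led_board.on(led)                        # light the scene with a single LED
        frames.append(camera.grab(exposure_s))   # grab a frame under that illumination
        led_board.off(led)                       # switch the LED off before the next band
    return np.stack(frames, axis=0)              # (bands, height, width) "hyperspectral cube"
```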

“The hardware is an engineering challenge, but still something that many people know how to make,” Goel said. “The main difference is in the software. The software makes the camera smart. It sees a scene and it tries to figure out ‘what is interesting here? What could be something here that the user would like to see?’”

Usually it picks the image that is most distinct from a standard RGB image. For instance, if it sees a spot on an apple that is otherwise invisible in RGB, it alerts the user. 

“It looks at interesting combinations of those pictures and finds the image or combination of images that’s furthest away of what you would see in RGB,” said Alex Mariakakis, a third-year Ph.D. student at the UbiComp Lab. “With that knowledge, maybe you can make another hyperspectral camera for an application that only needs four LEDs.” 
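 
The article does not spell out the exact scoring metric, but the idea of ranking bands by how far they depart from a plain RGB photo can be sketched roughly as follows. The mean-absolute-difference score here is only an illustrative stand-in for whatever measure the HyperCam software actually uses.

```python
import numpy as np


def most_informative_band(cube, rgb_image):
    """Rank bands by how different they look from an ordinary RGB photo.

    Illustrative stand-in only: the real scoring isn't given in the article,
    so the "distance" here is just the mean absolute difference from a
    grayscale version of the RGB image.
    """
    gray = rgb_image.astype(float).mean(axis=2)                       # collapse RGB to luminance
    scores = [np.abs(band.astype(float) - gray).mean() for band in cube]
    best = int(np.argmax(scores))                                     # band furthest from the RGB view
    return best, scores
```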

Mariakakis joined Goel’s work while looking for a security project, and applied the HyperCam to biometrics.

“By using different wavelengths you can get from HyperCam, you can actually pick out different features in the hand,” Mariakakis said. “We can get the texture of the skin and the vein pattern. Using those together, we do some computer vision and combine those features to basically give unique signatures to each person.”
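
As a rough illustration of that fusion step, the sketch below concatenates two feature vectors into one normalized signature. The `extract_texture` and `extract_veins` callables are hypothetical placeholders for the computer-vision steps Mariakakis mentions, which the article does not detail; two signatures could then be compared with a simple distance to decide whether they belong to the same person.

```python
import numpy as np


def hand_signature(visible_band, near_ir_band, extract_texture, extract_veins):
    """Fuse skin-texture and vein-pattern features into one biometric signature.

    `extract_texture` and `extract_veins` are hypothetical stand-ins for the
    computer-vision steps described in the article; their actual algorithms
    are not given.
    """
    texture = np.asarray(extract_texture(visible_band), dtype=float)  # skin texture from visible light
    veins = np.asarray(extract_veins(near_ir_band), dtype=float)      # veins show up under near-infrared
    signature = np.concatenate([texture, veins])
    return signature / (np.linalg.norm(signature) + 1e-9)             # normalize for distance comparison
```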

There are still problems with the technology, though. It doesn’t work very well in bright ambient light. One possible solution would be to detect ambient light and then subtract it to get the hyperspectral image. 

“This is a hard problem to overcome,” Joshi said. “If the ambient light is too bright, at some point as the ambient light gets brighter, the on-board lights can’t be detected and then the hyperspectral images won’t be visible.”
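
The subtraction idea mentioned above can be sketched as follows. It is only a rough illustration of the proposed fix, and, as Joshi notes, it breaks down once the ambient light overwhelms the LEDs.

```python
import numpy as np


def subtract_ambient(lit_frame, ambient_frame):
    """Subtract a no-LED (ambient-only) frame from an LED-lit frame.

    Rough sketch of the correction suggested in the article; if the ambient
    light is much brighter than the LEDs, the difference disappears into
    sensor noise, which is the limitation Joshi describes.
    """
    diff = lit_frame.astype(float) - ambient_frame.astype(float)
    return np.clip(diff, 0, None)  # clamp negative values introduced by noise
```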

Despite the flaws, Goel is hopeful the HyperCam will eventually be seen on devices like cellphones. 

“Having a camera like this on a consumer device isn’t far-fetched,” Goel said. “It can be used in a lot of applications like health, biometrics, and produce monitoring.”

Reach reporter Arunabh Satpathy at science@dailyuw.com. Twitter: @sarunabh
