A team of researchers has come up with a new technique that makes single-pixel cameras faster.
Compressed sensing is an exciting new computational technique for extracting large amounts of information from a signal. In one high-profile demonstration, for instance, researchers at Rice University built a camera that could produce 2-D images using only a single light sensor rather than the millions of light sensors found in a commodity camera.
But using compressed sensing for image acquisition is inefficient: That "single-pixel camera" needed thousands of exposures to produce a reasonably clear image.
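To make the idea concrete, here is a minimal sketch of how a single-pixel compressed-sensing camera acquires and reconstructs an image. It is not the MIT team's algorithm: it assumes a scene that is sparse in the pixel basis, uses random ±1 patterns as a stand-in for the projected masks, and recovers the image with a generic ISTA (iterative soft-thresholding) solver; the image size, exposure count, and regularization values are illustrative choices.

```python
# Sketch of single-pixel compressed-sensing acquisition and reconstruction.
# Assumptions (not from the paper): scene sparse in the pixel basis,
# random +/-1 measurement patterns, generic ISTA solver.
import numpy as np

rng = np.random.default_rng(0)

n_side = 16                      # 16x16 scene -> 256 unknowns
n_pixels = n_side * n_side
n_exposures = 64                 # far fewer exposures than pixels

# Synthetic sparse scene: a handful of bright points on a dark background.
scene = np.zeros(n_pixels)
scene[rng.choice(n_pixels, size=8, replace=False)] = rng.uniform(0.5, 1.0, 8)

# Each exposure: a random +/-1 pattern is projected onto the scene and the
# single light sensor records one number -- the pattern/scene inner product.
patterns = rng.choice([-1.0, 1.0], size=(n_exposures, n_pixels))
measurements = patterns @ scene

# Reconstruction by ISTA, a standard l1-regularized least-squares solver
# used in compressed sensing.
step = 1.0 / np.linalg.norm(patterns, 2) ** 2   # 1 / Lipschitz constant
lam = 0.05
estimate = np.zeros(n_pixels)
for _ in range(2000):
    residual = measurements - patterns @ estimate
    estimate = estimate + step * (patterns.T @ residual)   # gradient step
    estimate = np.sign(estimate) * np.maximum(np.abs(estimate) - lam * step, 0.0)

print("relative reconstruction error:",
      np.linalg.norm(estimate - scene) / np.linalg.norm(scene))
```

The point of the sketch is the trade-off the article describes: with only 64 exposures the solver can still recover a 256-pixel scene because the scene is sparse, and the new MIT technique aims to cut that exposure count much further.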
Researchers from the MIT Media Lab now describe a new technique that makes image acquisition using compressed sensing 50 times as efficient. In the case of the single-pixel camera, it could get the number of exposures down from thousands to dozens.
One intriguing aspect of compressed-sensing imaging systems is that, unlike conventional cameras, they don't require lenses. That could make them useful in harsh environments or in applications that use wavelengths of light outside the visible spectrum. Getting rid of the lens opens new prospects for the design of imaging systems.
"Formerly, imaging required a lens, and the lens would map pixels in space to sensors in an array, with everything precisely structured and engineered," said first author Guy Satat.
He added, "With computational imaging, we began to ask: Is a lens necessary? Does the sensor have to be a structured array? How many pixels should the sensor have? Is a single pixel sufficient? These questions essentially break down the fundamental idea of what a camera is. The fact that only a single pixel is required and a lens is no longer necessary relaxes major design constraints, and enables the development of novel imaging systems. Using ultrafast sensing makes the measurement significantly more efficient."
The technique depends on time-of-flight imaging, in which the round-trip travel time of a light pulse reveals the distance to points in the scene; but, somewhat circularly, one of its potential applications is improving the performance of time-of-flight cameras.
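For reference, the time-of-flight principle itself is simple: depth is half the distance light travels during the measured round trip. The sketch below illustrates only that relationship, not the paper's ultrafast-sensing method.

```python
# Time-of-flight principle: depth from a light pulse's round-trip time.
# Purely illustrative; not the paper's measurement scheme.
C = 299_792_458.0  # speed of light, m/s

def depth_from_round_trip(t_seconds: float) -> float:
    """Distance to a scene point, given the pulse's round-trip time."""
    return C * t_seconds / 2.0

# A 1 ns round trip corresponds to roughly 15 cm of depth.
print(depth_from_round_trip(1e-9))  # ~0.15 m
```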
The researchers present a theoretical analysis of compressed sensing that uses time-of-flight information. Their analysis shows how efficiently the technique can extract information about a visual scene, at different resolutions and with different numbers of sensors and distances between them.
They also describe a procedure for computing light patterns that minimizes the number of exposures. And, using synthetic data, they compare the performance of their reconstruction algorithm to that of existing compressed-sensing algorithms. But in ongoing work, they are developing a prototype of the system so that they can test their algorithm on real data.
The study appears in the journal IEEE Transactions on Computational Imaging.