pixels and pictures
Exploring the digital imaging chain from sensors to brains
Scooped by Philippe J DEWOST

Depth-sensing imaging system can peer through fog: Computational photography could solve a problem that bedevils self-driving cars

An inability to handle misty driving conditions has been one of the chief obstacles to the development of autonomous vehicular navigation systems that use visible light, which are preferable to radar-based systems for their high resolution and ability to read road signs and track lane markers. So, the MIT system could be a crucial step toward self-driving cars.


The researchers tested the system using a small tank of water with the vibrating motor from a humidifier immersed in it. In fog so dense that human vision could penetrate only 36 centimeters, the system was able to resolve images of objects and gauge their depth at a range of 57 centimeters.


Fifty-seven centimeters is not a great distance, but the fog produced for the study is far denser than any that a human driver would have to contend with; in the real world, a typical fog might afford a visibility of about 30 to 50 meters. The vital point is that the system performed better than human vision, whereas most imaging systems perform far worse. A navigation system that was even as good as a human driver at driving in fog would be a huge breakthrough.


"I decided to take on the challenge of developing a system that can see through actual fog," says Guy Satat, a graduate student in the MIT Media Lab, who led the research. "We're dealing with realistic fog, which is dense, dynamic, and heterogeneous. It is constantly moving and changing, with patches of denser or less-dense fog. Other methods are not designed to cope with such realistic scenarios."


Satat and his colleagues describe their system in a paper they'll present at the International Conference on Computational Photography in May. Satat is first author on the paper, and he's joined by his thesis advisor, associate professor of media arts and sciences Ramesh Raskar, and by Matthew Tancik, who was a graduate student in electrical engineering and computer science when the work was done.
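
The excerpt doesn't spell out the algorithm, but the general idea behind time-resolved imaging through fog is to histogram photon arrival times at each pixel, model the broad fog backscatter statistically, and subtract it so the narrow return from the hidden object stands out. Below is a minimal sketch of that idea in Python, assuming a Gamma-shaped backscatter model; all names and the synthetic data are illustrative, not the authors' published code.

```python
import numpy as np
from scipy import stats

def estimate_depth_through_fog(arrival_times_ns, bin_width_ns=0.1, max_time_ns=20.0):
    """Toy sketch: recover an object's depth from one pixel's photon
    arrival-time histogram when most photons are fog backscatter.

    Illustrative assumptions (not the published pipeline):
      * fog backscatter times roughly follow a Gamma distribution,
      * the object return is a narrow spike on top of that background.
    """
    bins = np.arange(0.0, max_time_ns + bin_width_ns, bin_width_ns)
    counts, edges = np.histogram(arrival_times_ns, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])

    # Fit a Gamma distribution to all arrival times as a proxy for the fog
    # background (reasonable when fog photons dominate the measurement).
    shape, loc, scale = stats.gamma.fit(arrival_times_ns, floc=0.0)
    background = stats.gamma.pdf(centers, shape, loc=loc, scale=scale)
    background *= counts.sum() * bin_width_ns  # scale pdf to expected counts

    # The object return is whatever the fog model fails to explain.
    residual = counts - background
    time_of_flight_ns = centers[int(np.argmax(residual))]

    c = 0.299792458  # speed of light in metres per nanosecond
    return 0.5 * c * time_of_flight_ns  # halve the round trip


# Synthetic example: dense fog plus a weak return from an object ~0.57 m away.
rng = np.random.default_rng(0)
fog = rng.gamma(shape=2.0, scale=1.5, size=50_000)                    # broad backscatter
signal = rng.normal(loc=2 * 0.57 / 0.2998, scale=0.05, size=2_000)   # narrow object peak
print(f"estimated depth = {estimate_depth_through_fog(np.concatenate([fog, signal])):.2f} m")
```

On this synthetic data the estimate lands close to the 0.57 m used to generate the signal, mirroring the 57-centimetre figure quoted above.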

Philippe J DEWOST's insight:

Ramesh Raskar in the mist?

Scooped by Philippe J DEWOST

Scientists reconstruct speech through soundproof glass by watching a bag of potato chips

Your bag of potato chips can hear what you're saying. Now, researchers from MIT are trying to figure out a way to make that bag of chips tell them everything that you said — and apparently they have a method that works. By pointing a video camera at the bag while audio is playing or someone is speaking, researchers can detect tiny vibrations in it that are caused by the sound. When that recording is later played back, MIT says it has figured out a way to read those vibrations and translate them back into music, speech, or seemingly any other sound.
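
As a rough illustration of the principle described above (turning a video of a vibrating object back into an audio waveform), one could extract a single motion value per frame and high-pass filter it. The sketch below is a deliberately crude stand-in; the MIT work recovers sub-pixel motion with far more sophisticated analysis, and every name here is illustrative.

```python
import numpy as np

def video_to_audio(frames, fps, highpass_hz=20.0):
    """Toy sketch of the visual-microphone idea: treat the tiny frame-to-frame
    brightness changes of a vibrating object as samples of an audio waveform.

    `frames` has shape (num_frames, height, width), e.g. a cropped grayscale
    video of the chip bag. This is a crude stand-in for the published method.
    """
    frames = np.asarray(frames, dtype=np.float64)

    # One scalar "vibration" sample per frame: mean absolute change from the
    # previous frame. Sound hitting the bag modulates this signal over time.
    motion = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))

    # Remove slow drift (lighting changes, camera wobble) with a crude
    # moving-average high-pass so only audio-rate variation remains.
    window = max(1, int(fps / highpass_hz))
    baseline = np.convolve(motion, np.ones(window) / window, mode="same")
    audio = motion - baseline

    # Normalise to [-1, 1]; the recovered sample rate equals the frame rate,
    # which is why a fast camera helps.
    peak = np.max(np.abs(audio))
    return audio / peak if peak > 0 else audio
```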

Philippe J DEWOST's insight:

Throw away your bag of chips before engaging in a confidential conversation. And avoid any line of sight.

Scooped by Philippe J DEWOST

MIT's Halide programming language can dramatically speed up image processing

A new programming language for image-processing algorithms yields code that runs much faster, reports the Massachusetts Institute of Technology — and this could lead to much better in-camera performance in dedicated devices and smartphones.
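
The article gives no code, but Halide's speed-ups come largely from reorganizing image-processing loops (tiling, fusing, vectorizing) without changing what they compute. As a plain-NumPy illustration of that kind of restructuring (this is not Halide syntax), compare a naive per-pixel 3x3 box blur with a separable, vectorized version that produces the same output:

```python
import numpy as np

def box_blur_naive(img):
    """Reference 3x3 box blur written as explicit per-pixel Python loops:
    easy to read, but very slow."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.float64)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y - 1, x - 1] = img[y - 1:y + 2, x - 1:x + 2].mean()
    return out

def box_blur_separable(img):
    """The same blur restructured the way an optimizing schedule would:
    split into a horizontal pass and a vertical pass (the filter is
    separable), each expressed as whole-array vectorized operations."""
    img = np.asarray(img, dtype=np.float64)
    horiz = (img[:, :-2] + img[:, 1:-1] + img[:, 2:]) / 3.0
    return (horiz[:-2] + horiz[1:-1] + horiz[2:]) / 3.0

img = np.random.rand(128, 128)
assert np.allclose(box_blur_naive(img), box_blur_separable(img))
```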

Scooped by Philippe J DEWOST

Newly Launched EyeNetra Mobile Eye-Test Device Could Lead To Prescription Virtual-Reality Screens

After five years of development and about 40,000 tests worldwide, the smartphone-powered eye-test devices developed by MIT spinout EyeNetra are coming to hospitals, optometric clinics, optical stores, and even homes nationwide.

But on the heels of its commercial release, EyeNetra says it’s been pursuing opportunities to collaborate with virtual-reality companies seeking to use the technology to develop “vision-corrected” virtual-reality displays.

“As much as we want to solve the prescription glasses market, we could also [help] bring virtual reality to the masses,” says EyeNetra co-founder Ramesh Raskar, an associate professor of media arts and sciences at the MIT Media Lab who co-invented the device.

The device, called Netra, is a plastic, binocular-like headset. Users attach a smartphone, with the startup’s app, to the front and peer through the headset at the phone’s display. Patterns, such as separate red and green lines or circles, appear on the screen. The user turns a dial to align the patterns and pushes a button to lock them in place. After eight interactions, the app calculates the difference between what the user sees as “aligned” and the actual alignment of the patterns. This signals any refractive errors, such as nearsightedness, farsightedness, and astigmatism. The app then displays the refractive powers, axis of astigmatism, and pupillary distance required for eyeglasses prescriptions.
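
To make the last step concrete: once each of the eight alignments has been converted into a refractive power along its meridian, sphere, cylinder, and axis can be recovered with a small least-squares fit, because the power of a sphero-cylindrical error along meridian θ is S + C·sin²(θ − axis). The sketch below is an illustrative reconstruction from the article's description, not EyeNetra's actual algorithm.

```python
import numpy as np

def fit_refraction(meridian_deg, measured_power_d):
    """Toy sketch of turning Netra-style per-meridian measurements into a
    prescription. Assumes each alignment at meridian angle theta has already
    been converted to a refractive power P(theta) in dioptres, and that
    P(theta) = S + C * sin^2(theta - axis)  (plus-cylinder convention).
    """
    theta = np.radians(np.asarray(meridian_deg, dtype=float))
    p = np.asarray(measured_power_d, dtype=float)

    # P(theta) is linear in [1, cos 2theta, sin 2theta]:
    #   P = (S + C/2) - (C/2)cos(2*axis)cos(2theta) - (C/2)sin(2*axis)sin(2theta)
    A = np.column_stack([np.ones_like(theta), np.cos(2 * theta), np.sin(2 * theta)])
    a0, a1, a2 = np.linalg.lstsq(A, p, rcond=None)[0]

    half_cyl = np.hypot(a1, a2)
    cylinder = 2.0 * half_cyl
    sphere = a0 - half_cyl
    axis = 0.5 * np.degrees(np.arctan2(-a2, -a1)) % 180.0
    return sphere, cylinder, axis


# Example: eight meridians, synthetic eye with S = -2.0 D, C = +1.0 D, axis = 30 deg.
angles = np.arange(0, 180, 22.5)
powers = -2.0 + 1.0 * np.sin(np.radians(angles - 30.0)) ** 2
print(fit_refraction(angles, powers))  # ~ (-2.0, 1.0, 30.0)
```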

In April, the startup launched Blink, an on-demand refractive test service in New York, where employees bring the startup's optometry tools, including the Netra device, to people's homes and offices. In India, EyeNetra has launched Nayantara, a similar program to provide low-cost eye tests to the poor and uninsured in remote villages, far from eye doctors. Both efforts use EyeNetra's suite of tools, now available for eye-care providers worldwide.

According to the World Health Organization, uncorrected refractive errors are the world’s second-highest cause of blindness. EyeNetra originally invented the device for the developing world — specifically, for poor and remote regions of Africa and Asia, where many people can’t find health care easily. India alone has around 300 million people in need of eyeglasses.

Philippe J DEWOST's insight:

Interesting crossroads between VR and healthcare, and a sound reminder of how incredibly powerful smartphones have become!

Scooped by Philippe J DEWOST

Low-cost 'nano-camera' developed that can operate at the speed of light | NDTV Gadgets

Researchers at MIT Media Lab have developed a $500 "nano-camera" that can operate at the speed of light. According to the researchers, potential applications of the 3D camera include collision-avoidance, gesture-recognition, medical imaging, motion-tracking and interactive gaming.


The team that developed the inexpensive "nano-camera" comprises Ramesh Raskar, Achuta Kadambi, Refael Whyte, Ayush Bhandari, and Christopher Barsi at MIT, and Adrian Dorrington and Lee Streeter from the University of Waikato in New Zealand.


The nano-camera uses the "Time of Flight" method to measure scenes, a method also used by Microsoft for its new Kinect sensor that ships with the Xbox One. With Time of Flight, the location of objects is calculated from how long it takes transmitted light to reflect off a surface and return to the sensor. However, unlike conventional Time of Flight cameras, the new camera produces accurate measurements even in fog or rain, and can also correctly locate translucent objects.
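
For reference, the arithmetic behind Time of Flight is simple: light covers the camera-to-object distance twice, so depth is half the round trip; Kinect-style continuous-wave sensors measure that round trip as a phase lag of modulated light rather than a direct pulse timing. A minimal sketch of both forms (illustrative only, not the MIT camera's multi-frequency pipeline):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def depth_from_round_trip(seconds):
    """Pulsed ToF: the light travels out and back, hence the factor 1/2."""
    return 0.5 * C * seconds

def depth_from_phase(phase_rad, modulation_hz):
    """Continuous-wave ToF: the reflected signal lags the emitted one by
    phase_rad; a full 2*pi cycle corresponds to a round trip of one
    modulation wavelength, so range wraps beyond C / (2 * modulation_hz)."""
    return (C * phase_rad) / (4.0 * math.pi * modulation_hz)

# A 5 ns round trip, or a quarter-cycle phase lag at 30 MHz, in metres.
print(depth_from_round_trip(5e-9))          # ~0.75 m
print(depth_from_phase(math.pi / 2, 30e6))  # ~1.25 m
```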

Philippe J DEWOST's insight:

Meet the nano-camera, the $500 little sister of the 2011 $500,000 femto-camera...
