pixels and pictures
Exploring the digital imaging chain from sensors to brains
Scooped by Philippe J DEWOST

The iPhone 12 cements computational photography, and Apple brings it into the light


For many years now, all smartphones have looked alike, and they look more and more alike: a black slab with rounded edges. It is the back of the phone that reveals its brand, and sometimes its model, to everyone facing you. Their photographic capabilities remain today the main differentiating element of each brand's new models, perhaps because they are the one feature you can actually see.

 

Photography is computation

 

The October 13 presentation of the iPhone 12 thus gave pride of place to its photo and video capabilities, and more precisely to what the phenomenal power of the nearly 12 billion transistors in the A14 "Bionic" chip makes possible in this domain, whether in the CPU cores, the graphics processor (GPU), the dedicated image signal processor (ISP), or the machine-learning hardware of the Neural Engine.

 

As if the optics and the sensors themselves, despite improved performance (including optical stabilization of the sensor itself rather than of the lenses), were receding into the background, eclipsed by a multitude of processing steps that run in real time and mobilize three quarters of the chipset during capture. While the competition has tried in vain to follow Apple on the number of lenses (the only way to guarantee quality optical zoom), then to differentiate itself on resolution by advertising ever more outlandish pixel counts, Apple has stuck with a 12 MP resolution ever since the iPhone X.

 

The difference lies elsewhere, in the staggering amount of processing performed before, during, and after what you believe to be a single shot: yours. Apple acknowledged this evolution by explicitly using the term "computational photography" during the event. There was, however, a precedent.
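
To make "processing before, during, and after the shot" concrete, here is a minimal sketch of one generic computational-photography building block: aligning and averaging a burst of frames so that several noisy captures become one cleaner "single" photo. This is only an illustration under simple assumptions (global translation, plain averaging), not Apple's actual pipeline; all names and parameters are invented for the example.

```python
import numpy as np

def estimate_shift(ref: np.ndarray, frame: np.ndarray) -> tuple[int, int]:
    """Estimate the (dy, dx) roll that realigns `frame` with `ref`, using FFT
    phase correlation. A real pipeline would align per tile and handle
    sub-pixel motion; this is the simplest possible version."""
    cross_power = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
    cross_power /= np.abs(cross_power) + 1e-12
    corr = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices beyond half the image size correspond to negative shifts.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

def merge_burst(frames: list[np.ndarray]) -> np.ndarray:
    """Align every frame of a burst to the first one and average them."""
    ref = frames[0].astype(np.float64)
    acc = ref.copy()
    for frame in frames[1:]:
        f = frame.astype(np.float64)
        dy, dx = estimate_shift(ref, f)
        acc += np.roll(f, shift=(dy, dx), axis=(0, 1))
    return acc / len(frames)

# Usage: simulate a shaky 8-frame burst of the same noisy scene.
rng = np.random.default_rng(0)
scene = rng.random((256, 256))
burst = [np.roll(scene, (rng.integers(-3, 4), rng.integers(-3, 4)), (0, 1))
         + rng.normal(0.0, 0.1, scene.shape) for _ in range(8)]
merged = merge_burst(burst)  # noticeably less noisy than any single frame
```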

Philippe J DEWOST's insight:

The digitization of the camera is complete; next comes the digitization of the photographer himself.

Scooped by Philippe J DEWOST

Here is what may be the first public demo of iPhone 11’s Night Mode, posted by model Coco Rocha.


From Inside Apple newsletter - Sept 13th Edition

The camera hardware in the new iPhones is certainly impressive, but the biggest implications for the practice of photography are in software.

Philippe Dewost, an Inside Apple reader and former CEO of an imaging startup acquired by Apple, wrote this incisive blog post about how 95 percent of what your phone captures in a photo is not captured but rather generated by computational photography. You can see the evidence in what may be the first public demo of iPhone 11’s Night Mode, posted by model Coco Rocha. — PHILIPPE DEWOST’S LIGHT SOURCES

Philippe J DEWOST's insight:

Illuminating example of how Computational Photography is redefining the very act of "taking pictures"

Scooped by Philippe J DEWOST

Depth-sensing imaging system can peer through fog: Computational photography could solve a problem that bedevils self-driving cars


An inability to handle misty driving conditions has been one of the chief obstacles to the development of autonomous vehicular navigation systems that use visible light, which are preferable to radar-based systems for their high resolution and ability to read road signs and track lane markers. So, the MIT system could be a crucial step toward self-driving cars.

 

The researchers tested the system using a small tank of water with the vibrating motor from a humidifier immersed in it. In fog so dense that human vision could penetrate only 36 centimeters, the system was able to resolve images of objects and gauge their depth at a range of 57 centimeters.

 

Fifty-seven centimeters is not a great distance, but the fog produced for the study is far denser than any that a human driver would have to contend with; in the real world, a typical fog might afford a visibility of about 30 to 50 meters. The vital point is that the system performed better than human vision, whereas most imaging systems perform far worse. A navigation system that was even as good as a human driver at driving in fog would be a huge breakthrough.
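
For a sense of how such a system can ignore the fog at all: a time-resolved camera records, for every pixel, a histogram of photon arrival times. Backscatter from the fog spreads out into a broad, smooth component of that histogram, while light reflected by a hidden object returns as a comparatively narrow peak whose position encodes depth. The sketch below is a deliberately simplified illustration of that separation (a crude moving-average background subtraction); the published method reportedly fits a per-pixel statistical model of the fog instead, and every name and number here is an assumption.

```python
import numpy as np

SPEED_OF_LIGHT = 3.0e8  # m/s

def depth_through_fog(histogram: np.ndarray, bin_width_s: float) -> float:
    """Toy per-pixel depth recovery from a photon arrival-time histogram.
    The fog backscatter is approximated by a wide moving average and
    subtracted; the remaining peak gives the round-trip time to the object."""
    radius = 15
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    padded = np.pad(histogram.astype(float), radius, mode="edge")
    background = np.convolve(padded, kernel, mode="valid")
    signal = np.clip(histogram - background, 0.0, None)
    peak_bin = int(np.argmax(signal))
    round_trip_s = peak_bin * bin_width_s
    return SPEED_OF_LIGHT * round_trip_s / 2.0  # one-way distance in metres

# Usage: simulate one pixel looking at an object 0.57 m away through fog.
rng = np.random.default_rng(1)
n_bins, bin_width = 200, 50e-12                  # 50-picosecond time bins
t = np.arange(n_bins) * bin_width
fog = 40.0 * np.exp(-t / 2e-9)                   # broad, decaying backscatter
true_bin = int((2 * 0.57 / SPEED_OF_LIGHT) / bin_width)
target = np.zeros(n_bins)
target[true_bin] = 60.0                          # narrow return from the object
counts = rng.poisson(fog + target)
print(depth_through_fog(counts, bin_width))      # prints roughly 0.57
```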

 

"I decided to take on the challenge of developing a system that can see through actual fog," says Guy Satat, a graduate student in the MIT Media Lab, who led the research. "We're dealing with realistic fog, which is dense, dynamic, and heterogeneous. It is constantly moving and changing, with patches of denser or less-dense fog. Other methods are not designed to cope with such realistic scenarios."

 

Satat and his colleagues describe their system in a paper they'll present at the International Conference on Computational Photography in May. Satat is first author on the paper, and he's joined by his thesis advisor, associate professor of media arts and sciences Ramesh Raskar, and by Matthew Tancik, who was a graduate student in electrical engineering and computer science when the work was done.

Philippe J DEWOST's insight:

Ramesh Raskar in the mist?

Scooped by Philippe J DEWOST

Cambridge-based image fusion pioneer Spectral Edge announces successful £1.5m funding round

Cambridge-based image fusion pioneer attracts major backing to commercialise product portfolio

Spectral Edge (http://www.spectraledge.co.uk/) today announced the successful completion of an oversubscribed £1.5 million second funding round. New lead investors IQ Capital and Parkwalk Advisors, along with angel investors from Cambridge Angels, Wren Capital, Cambridge Capital Group and Martlet, the Marshall of Cambridge Corporate Angel investment fund, join the Rainbow Seed Fund/Midven and Iceni in backing the company.

Spun out of the University of East Anglia (UEA) Colour Lab, Spectral Edge has developed innovative image fusion technology. This combines different types of image, ranging from the visible to the invisible (such as infrared and thermal), to enhance detail, aid visual accessibility, and create ever more beautiful pictures.

Spectral Edge’s Phusion technology platform has already been proven in the visual accessibility market, where independent studies have shown that it can transform the TV viewing experience for the estimated 4% of the world’s population that suffers from colour-blindness. It enhances live TV and video, allowing colour-blind viewers to differentiate between colour combinations such as red-green and pink-grey so that otherwise inaccessible content such as sport can be enjoyed. 

The new funding will be used to expand Spectral Edge’s team, increase investment in sales and marketing, and underpin development of its product portfolio into IP-licensable products and reference designs. Spectral Edge is mainly targeting computational photography, where blending near-infrared and visible images gives higher quality, more beautiful results with greater depth. Other applications include security, where the combination of visible and thermal imaging enhances details to provide easier identification of people filmed on surveillance cameras, as well as visual accessibility through its Eyeteq brand.
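
For readers curious what "blending near-infrared and visible images" can look like in code, the sketch below shows a generic, textbook-style fusion: the NIR frame contributes high-frequency detail to the luminance of the colour image while chrominance is left untouched. It is only an illustration of the general technique, not Spectral Edge's Phusion algorithm; the function names, weights, and colour transform are assumptions.

```python
import numpy as np

def box_blur(img: np.ndarray, radius: int) -> np.ndarray:
    """Separable box blur, used here as a cheap low-pass filter."""
    size = 2 * radius + 1
    kernel = np.ones(size) / size
    padded = np.pad(img, radius, mode="edge")
    tmp = np.apply_along_axis(lambda m: np.convolve(m, kernel, mode="valid"), 0, padded)
    return np.apply_along_axis(lambda m: np.convolve(m, kernel, mode="valid"), 1, tmp)

def fuse_visible_nir(rgb: np.ndarray, nir: np.ndarray, strength: float = 0.5) -> np.ndarray:
    """Generic visible/NIR fusion: inject high-frequency NIR detail into the
    luminance of the visible image, keeping its colour (chrominance) unchanged.
    `rgb` is an HxWx3 array in [0, 1]; `nir` is an HxW array in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b          # Rec. 601 luma
    cr, cb = r - y, b - y                          # simple chrominance differences

    detail = nir - box_blur(nir, radius=4)         # unsharp-style NIR detail
    y_fused = np.clip(y + strength * detail, 0.0, 1.0)

    r_f = y_fused + cr
    b_f = y_fused + cb
    g_f = (y_fused - 0.299 * r_f - 0.114 * b_f) / 0.587
    return np.clip(np.stack([r_f, g_f, b_f], axis=-1), 0.0, 1.0)

# Usage with hypothetical, registered frames of the same scene:
visible = np.random.rand(480, 640, 3)
nir = np.random.rand(480, 640)
fused = fuse_visible_nir(visible, nir)
```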

“Spectral Edge is a true pioneer in the field of photography. They are set to disrupt and transform the imaging sector, not just within consumer and professional photography, but also across a broad range of business sectors,” said Max Bautin, Managing Partner at IQ Capital. “Backed by a robust catalogue of IP, Spectral Edge’s technology enables individuals and companies to take pictures and record videos with unparalleled detail by taking advantage of non-visible information like near-infrared and heat. We are proud to add Spectral Edge to our portfolio of companies. We back cutting-edge, IP-rich technology that pushes boundaries but also has a proven track record of stable growth, and Spectral Edge fits that mould perfectly.”

“We are delighted to support Professor Graham Finlayson and his team at Spectral Edge,” said Alastair Kilgour, CIO of Parkwalk Advisors. “We believe Phusion could prove to be a substantial enhancement to the quality of digital imaging and as such have significant commercial prospects.”

Spectral Edge is led by an experienced team that combines deep technical and business experience. It includes Professor Graham Finlayson, Head of the Vision Group and Professor of Computing Science at UEA; Christopher Cytera (managing director); and serial entrepreneur Dr Robert Swann (chairman).

Philippe J DEWOST's insight:

Looks like the imsense founder and IQ Capital are doing it again #BeenThereDoneThat. Congratulations, Graham!

Scooped by Philippe J DEWOST

The mastermind of Google’s Pixel camera, Marc Levoy, quietly left the company in March


Two key Pixel execs, including the computer researcher who led the team that developed the computational photography powering the Pixel’s camera, have left Google in recent months, according to a new report from The Information. The executives who left are distinguished engineer Marc Levoy and former Pixel general manager Mario Queiroz.

 

Queiroz had apparently already moved off the Pixel team two months before the launch of the Pixel 4 into a role that reported directly to Google CEO Sundar Pichai. However, he left in January to join Palo Alto Networks, according to The Information and his LinkedIn. Levoy left Google in March, which is also reflected on his LinkedIn.

Philippe J DEWOST's insight:

Optical Destabilization is underway at Google. I was lucky to meet Marc Levoy 10 years ago while I was running #imsense #eye-fidelity; an impressive engineer, he will be a great loss.


Scooped by Philippe J DEWOST

With Computational Photography, 95% of what you shoot is no longer captured but generated


Capturing light is becoming a thing of the past: with computational photography, it is now eclipsed by processing pixels.
iPhone 11 and its "Deep Fusion" mode leave no doubt that photography is now software, and that software is also eating cameras.

Philippe J DEWOST's insight:

Software is eating the world, and yesterday's Apple keynote shows that cameras are on the menu.


Scooped by Philippe J DEWOST

New algorithm lets photographers change the depth of images virtually


Researchers have unveiled a new photography technique called computational zoom that allows photographers to manipulate the composition of their images after they've been taken, and to create what are described as "physically unattainable" photos. The researchers from the University of California, Santa Barbara and tech company Nvidia have detailed the findings in a paper, as spotted by DPReview.

 

In order to achieve computational zoom, photographers have to take a stack of photos that retain the same focal length, but with the camera edging slightly closer and closer to the subject. An algorithm and the computational zoom system then spit out a 3D rendering of the scene with multiple views based on the photo stack. All of that information is then “used to synthesize multi-perspective images which have novel compositions through a user interface” — meaning photographers can then manipulate and change a photo’s composition using the software in real time.
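
As a very rough illustration of the multi-perspective idea (and emphatically not the researchers' actual pipeline, which reconstructs the scene in 3D from the stack), the hypothetical sketch below composites a stack of registered photos taken at the same focal length from different camera distances by choosing, for each pixel, which capture to sample from according to a depth map, so foreground and background end up rendered from different perspectives. The stack layout, depth map, and function name are all assumptions.

```python
import numpy as np

def multi_perspective_composite(stack: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Toy 'computational zoom' composite. `stack` is an (N, H, W, 3) array of
    registered photos taken at the same focal length from different camera
    positions; `depth` is an (H, W) map in [0, 1] (0 = nearest scene content,
    1 = farthest). Each pixel is copied from the capture assigned to its depth
    layer, so near and far content carry different perspectives."""
    n = stack.shape[0]
    layer = np.clip((depth * n).astype(int), 0, n - 1)
    rows, cols = np.indices(depth.shape)
    return stack[layer, rows, cols]

# Usage with hypothetical data: 5 registered captures of a 480x640 scene.
stack = np.random.rand(5, 480, 640, 3)
depth = np.tile(np.linspace(0.0, 1.0, 640), (480, 1))  # fake depth ramp
composite = multi_perspective_composite(stack, depth)
print(composite.shape)  # (480, 640, 3)
```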

 

The researchers say the multi-perspective camera model can generate compositions that are not physically attainable, and can extend a photographer’s control over factors such as the relative size of objects at different depths and the sense of depth of the picture. So the final image isn’t technically one photo, but an amalgamation of many. The team hopes to make the technology available to photographers in the form of software plug-ins, reports DPReview.

Philippe J DEWOST's insight:

Will software become more successful than lightfield cameras?
