
4D light-field models for view interpolation

Micheale Hadera
2017-01-01

Abstract

A video is a temporally ordered sequence of images. Videos capture appearance from a fixed set of viewpoints, namely where the camera is placed during recording (acquisition). A light field generalizes this idea by encoding enough information in the recorded data that pictures may be rendered from arbitrarily chosen viewpoints. Typically, light fields are recorded by taking several pictures of the environment at a densely distributed set of viewpoints. Two popular acquisition strategies are to use either a dense array of cameras or a single camera with an array of lenses. Modelling the light field requires a good parametrization of the set of oriented lines. In this paper, we explore two techniques for efficiently acquiring, storing and reconstructing light fields in uniform and non-uniform fashion. Both techniques sample the light field by sampling a set of lines that intersect a plane and a sphere. The first approach relies on intersecting camera rays with two planes or a sphere and storing these intersection points in a data structure. The second method allows uniform sampling of all five dimensions of the light field, using the directional space and plane intersection. We conclude that, although the first approach suffers from sampling biases and disparity problems, it gives better reconstruction and smoother video sequences than the second technique.
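The ray-intersection idea behind the first approach can be illustrated with the classic two-plane light-field parametrization: a ray is reduced to a 4D coordinate (u, v, s, t) given by its intersections with two parallel planes. The following is a minimal sketch under that assumption; the function name, plane placement (z = 0 and z = 1), and coordinate conventions are illustrative choices, not the paper's actual implementation.

```python
def ray_to_uvst(origin, direction):
    """Map a ray to its two-plane coordinates (u, v, s, t).

    The ray is parameterized as p(k) = origin + k * direction and is
    intersected with the "uv" plane z = 0 and the "st" plane z = 1.
    """
    ox, oy, oz = origin
    dx, dy, dz = direction
    if dz == 0:
        # A ray parallel to the planes cannot be represented in this
        # parametrization (one source of the sampling bias mentioned above).
        raise ValueError("ray is parallel to the parameterization planes")
    # Intersection with z = 0: solve oz + k0 * dz = 0.
    k0 = -oz / dz
    u, v = ox + k0 * dx, oy + k0 * dy
    # Intersection with z = 1: solve oz + k1 * dz = 1.
    k1 = (1.0 - oz) / dz
    s, t = ox + k1 * dx, oy + k1 * dy
    return (u, v, s, t)
```

For example, a ray starting at (1, 2, 0) and travelling straight along +z pierces both planes at x = 1, y = 2, giving (1, 2, 1, 2); storing such 4-tuples in a data structure is what the abstract describes as recording the intersection points.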
Files in this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11567/1272138
Warning: the displayed data have not been validated by the university.

Citations
  • PMC: not available
  • Scopus: 1
  • Web of Science: not available