The invention of digital computation, together with advances in material science and in mechanical and optical engineering, has shifted traditional photography (which emerged at the beginning of the 19th century) towards computational imaging and computational photography. Generalizing optics, sensors, processing, and illumination enables entirely new applications that go beyond merely taking images. Lenses can be replaced with coded pinhole apertures to support a flat camera design. Efficient single-pixel cameras are enabled by compressive sensing and controlled spatio-temporal illumination modulation. Computational reconstruction methods can use raw data from a larger family of sensor devices, such as metamaterial apertures, and allow advanced image processing after data acquisition, such as refocussing from light fields.

In this thesis, a new type of transparent, flexible, and scalable image sensor and its applications are demonstrated. The main component, a luminescent concentrator (LC), absorbs light through its surface and propagates it to its edges by total internal reflection. The optical design, an aperture structure around the border, yields a two-dimensional light field with a spatial and an angular component that allows the reconstruction of images focussed on the surface of the sensor. We propose linear methods capable of real-time reconstruction, as well as a non-linear method that requires more computation but yields higher image quality. Stacking multiple LCs that absorb different sub-bands of the light spectrum enables color imaging. Knowledge of the light transport within the LC makes it possible to reconstruct a focal stack from a single recording, which can be used for depth estimation. The sensor can also serve as a non-touch interface for human-computer interaction. We show that gestures can be recognized with a high classification rate directly from the sensor values, without prior image reconstruction.
We also propose a dimensionality-reduction method for finding the minimal hardware requirements of a specific classification task (e.g., motion gestures).
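As a rough illustration of the linear reconstruction idea mentioned above (a sketch, not the thesis's actual pipeline), the image on the LC surface can be modeled as a vector x that is related to the edge measurements b by a light-transport matrix A, so that a regularized least-squares solve recovers x. All dimensions and the random stand-in for A below are hypothetical; in practice A would be obtained by calibration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a 16x16 image, observed through 300 edge apertures.
n_pixels, n_meas = 16 * 16, 300

# Stand-in for the calibrated light-transport matrix of the LC
# (measured in a real system, random here for illustration only).
A = rng.standard_normal((n_meas, n_pixels))

x_true = rng.random(n_pixels)                         # unknown surface image
b = A @ x_true + 0.01 * rng.standard_normal(n_meas)   # noisy edge readings

# Tikhonov-regularized linear reconstruction:
#   minimize ||A x - b||^2 + lam * ||x||^2
lam = 1e-2
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_pixels), A.T @ b)

err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {err:.3f}")
```

Because the normal-equations matrix can be factorized once per sensor, such a linear solve amounts to a single matrix-vector product per frame, which is what makes real-time reconstruction plausible.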