Light-field technology has the potential to supersede traditional systems for imaging and illumination. Light fields parametrize incoming and outgoing light rays with a four-dimensional function and thus support processing operations that are impossible with classical systems. For example, while a regular camera records only a two-dimensional image, a light-field camera additionally records directional information. A light-field photograph therefore allows parameters such as aperture, focus, and perspective to be changed after recording. In microscopy, light-field technology supports fast optical volumetric readouts and precise, controllable four-dimensional illumination, so that selected parts of a specimen can be illuminated interactively while the volume is being viewed. Light fields require a substantial number of samples to prevent undersampling artifacts and to guarantee high-quality results. Compact light-field systems with integrated microlens arrays (MLAs) multiplex spatial and directional information onto a single sensor or a single light modulator; the sensor or modulator must therefore have a high resolution to avoid spatial undersampling. Wide-aperture light-field systems such as camera arrays or light domes suffer from severe artifacts if too few directional samples are used, and scanning or recording too few temporal samples leads to infeasible sampling times and serious motion artifacts. Adequately sampling light fields in the spatial, directional, and temporal domains may therefore require complex sampling devices, while the number of available samples remains limited by resolution, bandwidth, exposure duration, or budget. While common approaches deal with these constraints by distributing the available samples uniformly, we propose non-uniform sample placements in this thesis. Furthermore, we apply upsampling techniques to achieve results of a quality as if significantly more samples had been used.
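The post-capture refocusing mentioned above can be sketched with the classic shift-and-add technique: each sub-aperture view of the four-dimensional light field is shifted proportionally to its directional coordinates and the shifted views are averaged. This is a minimal illustration of the general principle, not the method of this thesis; the array shapes and the `alpha` parameter are illustrative assumptions.

```python
import numpy as np

def refocus(light_field: np.ndarray, alpha: float) -> np.ndarray:
    """Shift-and-add refocusing of a 4D light field L(u, v, s, t).

    (u, v) indexes the sub-aperture (directional) position and
    (s, t) the spatial pixel position. `alpha` selects the virtual
    focal plane: 0 keeps the original focus, nonzero values shift it.
    """
    U, V, S, T = light_field.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Integer pixel shift proportional to the ray direction,
            # measured from the central sub-aperture view.
            du = int(round(alpha * (u - (U - 1) / 2)))
            dv = int(round(alpha * (v - (V - 1) / 2)))
            out += np.roll(light_field[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

# Toy example: a 5x5 array of 32x32 sub-aperture views (random data).
rng = np.random.default_rng(0)
lf = rng.random((5, 5, 32, 32))
img = refocus(lf, alpha=1.0)
```

With `alpha=0` the function reduces to a plain average over all views, i.e., a wide-aperture image focused at the original plane; varying `alpha` sweeps the focal plane through the scene after recording.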
The first part of this thesis presents a method for the coded recording of high-dynamic-range (HDR) light-field videos. Capturing exposure sequences is a common technique for HDR imaging, but it reduces the frame rate and causes motion blur when the camera moves. We decrease the capturing time and reduce motion blur for HDR light-field video recording by applying a spatiotemporal exposure pattern while recording frames with a camera array. A specialized deblurring and reconstruction technique then generates results with the same dynamic range as regular exposure sequencing. The next part of this thesis focuses on angular superresolution approaches for light fields captured with sparse camera arrays and for reflectance fields recorded with sparse light domes. We derive optimal sampling masks for a desired configuration and directionally upsample the recorded light-field and reflectance data. One of our contributions is the use of local dictionaries, extracted directly from the scene, for upsampling. Because our methods avoid depth reconstruction, which often fails for complex scene effects such as transparency and reflections, they are applicable to arbitrary scenes. The last part of this dissertation explains how to concentrate light simultaneously at multiple selected volumetric positions. We use a light-field microscope to record a volume and subsequently illuminate individual specimen particles by means of a four-dimensional illumination light field. One of our contributions is a temporal coding strategy that significantly reduces the scanning time for scattering and non-scattering specimens. The methods presented in this work might enhance future light-field systems such as cameras, light stages, and microscopes.
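A spatiotemporal exposure pattern of the kind described for the camera array can be sketched as a checkerboard over camera index and frame index: every frame then contains both short and long exposures, and every camera cycles through all exposure levels over time. The concrete exposure values, array size, and checkerboard layout below are illustrative assumptions, not the specific pattern derived in the thesis.

```python
import numpy as np

def exposure_pattern(n_cameras: int, n_frames: int,
                     exposures=(1.0, 4.0)) -> np.ndarray:
    """Return an (n_cameras, n_frames) matrix of per-frame exposure times.

    Exposures alternate in a checkerboard over camera and frame index,
    so each exposure level is present in every frame (column) and in
    every camera's timeline (row).
    """
    cam, frame = np.meshgrid(np.arange(n_cameras), np.arange(n_frames),
                             indexing="ij")
    idx = (cam + frame) % len(exposures)
    return np.asarray(exposures)[idx]

# Hypothetical 4-camera array recording 6 frames.
pattern = exposure_pattern(4, 6)
```

Because both exposure levels appear in every frame, an HDR frame can in principle be reconstructed at every time step instead of waiting for a full per-camera exposure sequence, which is the intuition behind trading spatial (camera) diversity for temporal speed.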