Watching photons on the fly | SPIE Newsroom

Video recording of ultrafast phenomena, such as dynamic events in molecular biology, would transform our understanding of a wide range of processes. However, using a detector array based on CCD or CMOS technologies is fundamentally limited by the sensor's on-chip storage and data-transfer speed. To get around this problem, the most practical approach is to use a streak camera. In this ultrafast imaging device, the incident light first passes through a narrow entrance slit (usually 50μm wide) and is imaged onto the photocathode of a streak tube. Here, the incident light is converted into photoelectrons, which are accelerated by an accelerating mesh. A pair of electrodes then applies a sweeping (i.e., time-varying) voltage along the axis perpendicular to the device's entrance slit. Because of this sweeping voltage, electrons arriving at different times are deflected to different spatial positions, and these electrons are then multiplied by a microchannel plate. They subsequently bombard a phosphor screen and are converted back into light. The phosphor screen is imaged onto a CCD, which records the image. However, the resultant image is normally 1D: only a single line of the scene can be seen at a time. Acquiring a 2D image requires mechanical scanning across the entire field of view, which severely restricts the recordable scenes because the event itself must be repetitive.
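To make the time-to-space mapping concrete, consider an idealized linear sweep (the symbols below are illustrative and are not taken from the article): the deflection on the phosphor screen grows linearly with photon arrival time, so the achievable temporal resolution is set by the sweep speed and the spatial resolution at the screen,

\[
y_{\mathrm{screen}}(t) \;\approx\; y_0 + v_{\mathrm{sweep}}\, t,
\qquad
\delta t \;\approx\; \frac{\delta y}{v_{\mathrm{sweep}}},
\]

where \(v_{\mathrm{sweep}}\) is the streak (shearing) velocity imposed by the ramped deflection voltage and \(\delta y\) is the spatial resolution of the screen and readout optics.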

Previous approaches to enable 2D ultrafast imaging of nonrepetitive events include sequentially timed all-optical mapping photography and parallel streak imaging using a tilted lenslet array.1, 2 However, most of them either rely on active illumination or suffer from significant throughput loss. To overcome these limitations, we have developed a new computational ultrafast imaging method, referred to as compressed ultrafast photography (CUP), which can capture 2D dynamic scenes at up to 100 billion frames per second.3 Akin to a conventional photographic camera, CUP is receive-only, thereby allowing high-speed video recording of a variety of luminescent—such as fluorescent or bioluminescent—objects.

Based on the concept of compressed sensing, CUP works by encoding the input scene with a random binary pattern in the spatial domain and then shearing the resultant image in a streak camera whose entrance slit is fully opened. The random binary pattern, which encodes the scene as is standard in compressed sensing, also serves as the key to unlocking and retrieving the time information of the input scene during the subsequent image reconstruction. CUP forms images by successively applying three operators (a spatial encoding operator, a temporal shearing operator, and a spatiotemporal integration operator) to the input time-lapse event I(x, y, t), where x and y are spatial coordinates and t is time. The image is reconstructed by solving the inverse problem of these three processes. A typical input scene can be regarded as sparse in the spatiotemporal domain, meaning that it can be represented by a matrix in which a large fraction of the elements are zeros. Under this condition, the original event datacube can be reasonably estimated using a two-step iterative shrinkage/thresholding (TwIST) algorithm.4
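The forward model can be sketched in a few lines of code. The following is a minimal illustration, not the authors' implementation: the array sizes, variable names, and the one-pixel-row-per-frame shear are assumptions made for the sketch, which simply applies the three operators in sequence to a toy datacube.

```python
import numpy as np

def encode(scene, mask):
    """Spatial encoding operator C: multiply every temporal frame of the
    scene by the same random binary mask (the DMD pattern)."""
    # scene: (ny, nx, nt); mask: (ny, nx)
    return scene * mask[:, :, None]

def shear(encoded):
    """Temporal shearing operator S: shift frame t by t rows along y,
    mimicking the streak camera's time-to-space deflection."""
    ny, nx, nt = encoded.shape
    sheared = np.zeros((ny + nt - 1, nx, nt))
    for t in range(nt):
        sheared[t:t + ny, :, t] = encoded[:, :, t]
    return sheared

def integrate(sheared):
    """Spatiotemporal integration operator T: the CCD sums all sheared
    frames within a single exposure, yielding one 2D measurement."""
    return sheared.sum(axis=2)

# Forward model E = T S C I applied to a toy event datacube I(x, y, t)
rng = np.random.default_rng(0)
scene = rng.random((32, 32, 16))                   # (y, x, t), toy sizes
mask = (rng.random((32, 32)) > 0.5).astype(float)  # random binary DMD pattern
measurement = integrate(shear(encode(scene, mask)))
print(measurement.shape)                           # (47, 32): one snapshot encodes all 16 frames
```

Reconstruction then amounts to inverting this chain of operators under a sparsity assumption, which the authors do with the TwIST algorithm.4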

Figure 1 shows CUP's system schematic. The scene is first imaged onto an intermediate plane by a camera lens. This image is then relayed to a spatial encoding unit, a digital micromirror device (DMD), which consists of tens of thousands of small MEMS (microelectromechanical systems) mirrors. Each micromirror can be individually turned ‘on’ or ‘off.’ The light reflected from the ‘on’ micromirrors is collected by a microscope objective and passed to the streak camera, while the light reflected from the ‘off’ micromirrors falls outside the objective's collection angle. In the streak camera, the entrance slit is opened to its maximum width, allowing the resultant 2D image to be temporally sheared along the vertical axis; the amount of shearing depends on the arrival time of the incident photons. The final spatiotemporally multiplexed image is captured by a CCD within a single exposure. Using CUP, we can reconstruct an event datacube with 150×150×350 (x, y, t) voxels (volume elements).
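As a back-of-envelope check on these numbers, assuming one datacube voxel maps to one camera pixel and the streak shears the image by one pixel row per frame (assumptions of this sketch, not stated specifications), the single-snapshot footprint on the CCD for a 150×150×350 datacube would be:

```python
nx, ny, nt = 150, 150, 350       # reported (x, y, t) voxel counts
footprint = (ny + nt - 1, nx)    # sheared-and-integrated image size in pixels
print(footprint)                 # (499, 150)
```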


Figure 1. Optical setup of compressed ultrafast photography (CUP). DMD: Digital micromirror device.

To demonstrate CUP, we imaged light reflection, refraction, and racing in two different media (air and resin) (see Figure 2). Our technique, for the first time, enables video recording of photon propagation at a temporal resolution down to tens of picoseconds. Additionally, we imaged an apparent faster-than-light (FTL) phenomenon. By obliquely shining a laser pulse toward a stripe pattern and monitoring the movement of the intersected wavefront, we observed motion at twice the speed of light. This FTL propagation does not violate Einstein's relativity, however, because no actual information is transmitted by the motion. Moreover, to further expand CUP's functionality, we added a color separation unit to the system, allowing simultaneous acquisition of a 4D datacube (x, y, t, λ), where λ is wavelength, within a single camera snapshot.
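The apparent superluminal speed follows from simple geometry. As an illustration (the incidence angle below is chosen to reproduce the reported factor of two; it is not quoted in the article), a plane wavefront arriving at incidence angle \(\theta_i\) from the surface normal sweeps its line of intersection along the surface at

\[
v_{\mathrm{apparent}} \;=\; \frac{c}{\sin\theta_i},
\qquad
\theta_i = 30^{\circ} \;\Rightarrow\; v_{\mathrm{apparent}} = 2c,
\]

so the intersection point can outrun light even though no photon, and hence no information, travels faster than c.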


Figure 2. CUP of light propagation. (a) Laser pulse reflected from a mirror. (b) Laser pulse refracted from an air-resin interface. (c) Laser pulses racing in air and resin. Scale bar (yellow, top right image), 10mm.

CUP is a universal imaging platform that can be coupled to a variety of optical instruments, such as microscopes and telescopes, adding unprecedented imaging speed to these modalities and facilitating new scientific discoveries at scales from cellular organelles to distant galaxies. So far, our results have shown that CUP works with a photographic setup and can video-record macroscopic objects. Our next step will be to show that CUP can be coupled to a high-resolution microscope and provide ultrafast dynamic movies of processes inside a living cell.

This work was supported in part by National Institutes of Health grants DP1 EB016986 (NIH Director's Pioneer Award) and R01 CA186567 (NIH Director's Transformative Research Award). LVW has a financial interest in Microphotoacoustics Inc. and Endra Inc., which, however, did not support this work.


Liang Gao, Jinyang Liang, Lihong Wang
Washington University in St. Louis
Saint Louis, MO

Liang Gao is currently a postdoctoral research associate in biomedical engineering. He develops modern optical instrumentation for biological and medical applications. His primary research is in microscopy, including super-resolution microscopy and photoacoustic microscopy, cost-effective high-performance optics for diagnostics, ultrafast imaging, and multidimensional imaging. He received a BS degree in physics from Tsinghua University, China, in 2005 and a PhD in applied physics and bioengineering from Rice University, TX, in 2011.

Jinyang Liang is currently a postdoctoral research associate in the Department of Biomedical Engineering. His primary research focuses on the implementation of optical modulation techniques to develop modern optical instruments for applications in biology and physics. He received a PhD in electrical engineering from the University of Texas at Austin in 2012.

Lihong Wang holds the Gene K. Beare Distinguished Professorship of Biomedical Engineering. His laboratory was the first to report functional photoacoustic tomography, 3D photoacoustic microscopy, photoacoustic reporter gene imaging, the photoacoustic Doppler effect, the universal photoacoustic reconstruction algorithm, microwave-induced thermoacoustic tomography, ultrasound-modulated optical tomography, time-reversed ultrasonically encoded optical focusing, nonlinear photoacoustic wavefront shaping, compressed ultrafast photography, sonoluminescence tomography, Mueller-matrix optical coherence tomography, optical coherence computed tomography, and oblique-incidence reflectometry.


References:
1. K. Nakagawa, A. Iwasaki, Y. Oishi, R. Horisaki, A. Tsukamoto, A. Nakamura, K. Hirosawa, H. Liao, T. Ushida, K. Goda, F. Kannari, I. Sakuma, Sequentially timed all-optical mapping photography (STAMP), Nat. Photon. 8, p. 695-700, 2014.
2. B. Heshmat, G. Satat, C. Barsi, R. Raskar, Single-shot ultrafast imaging using parallax-free alignment with a tilted lenslet array, Conf. Lasers Electro-Opt. (CLEO), 2014.
3. L. Gao, J. Liang, C. Li, L. V. Wang, Single-shot compressed ultrafast photography at one hundred billion frames per second, Nature 516, p. 74-77, 2014.
4. J. M. Bioucas-Dias, M. A. T. Figueiredo, A new TwIST: two-step iterative shrinkage/thresholding algorithms for image restoration, IEEE Trans. Image Process. 16, p. 2992-3004, 2007.