The Sharper Image for Surveillance

Livermore researchers imaged the University of California’s Lick Observatory on Mount Hamilton with a remote-surveillance system from 60 kilometers away. (a) The unprocessed image is blurry. (b) The image produced using the speckle-processing technique is much clearer.

A technique adapted by Livermore scientists to take the twinkle out of stars is now being used to improve the resolution of long-range surveillance systems trained on earthbound objects. The speckle-imaging technique involves taking tens to hundreds of short-exposure pictures and reconstructing a single, sharp image with image-processing software.
The technique drew the interest of Livermore engineer Carmen Carrano, who developed a prototype remote-surveillance system that can produce a detailed image of a face from a couple of kilometers away. The system also helps identify vehicles tens of kilometers away and improves the viewing of large structures more than 60 kilometers away.

Short Exposures and Speckle Imaging
Typically, a person viewed 3 kilometers away with the naked eye appears as a small speck. Even high-power lenses yield little more than a blurry figure. The culprit is atmospheric turbulence, the same process that causes mirages to waver above hot asphalt on a warm day. In speckle imaging, several hundred short-exposure pictures are taken to freeze the effects of Earth’s atmospheric turbulence. The enhanced surveillance system designed by Carrano incorporates the speckle-image-processing technique, which was first developed during the 1980s for astronomical applications. Speckle imaging has been used to obtain high-resolution astronomical images such as those of Comet Shoemaker–Levy 9 hurtling into Jupiter, satellites orbiting Earth, and Saturn’s moon Titan. (See S&TR, April 2000, A Speckled Look at Saturn’s Moon, Titan.)
To bring detail back into the blurry image of an object, astronomers needed a way to subtract the effects of the intervening atmosphere. Speckle imaging does this by using many short-exposure frames of the same scene. Unlike a long-exposure image, which presents a low-resolution, uniformly blurry view of the object, short-exposure images effectively freeze the atmospheric aberrations, retaining high-frequency spatial information. A short-exposure image of a point target looks like a speckle pattern, hence the name speckle imaging. These speckles are caused by the refraction, or bending, of light rays as they travel through the turbulent atmosphere.
Each short-exposure frame contains high-resolution information. However, because each frame is distorted, many frames are needed to reconstruct a sharper image. Simply averaging the short-exposure images together would merely produce another long-exposure image. Instead, amplitude and phase information characterizing the true image are estimated separately from Fourier transforms of the short-exposure images, using specialized averaging procedures. These amplitude and phase estimates are then combined and inverse-Fourier transformed to reconstruct a final, sharper image. A Fourier transform is a standard mathematical operation that converts data in one domain (for example, time) into data in another domain (for example, frequency). An inverse-Fourier transform converts the information back into the original domain. These transformations are often used to bring data into a domain that is easier to process computationally.
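To make the reconstruction concrete, the following sketch (in Python with NumPy, chosen here purely for illustration) estimates the image amplitude from the averaged power spectrum of the frames and, for simplicity, takes the phase from the averaged transform. Operational speckle processing recovers phase with more robust procedures, such as bispectrum averaging, and corrects the averaged power spectrum for the atmosphere's speckle transfer function; the function below is a simplified, assumed illustration, not the Laboratory's code.

    import numpy as np

    def speckle_reconstruct(frames):
        """Reconstruct a sharper image from a stack of short-exposure frames.

        frames: array of shape (N, H, W), one short-exposure image per slice.
        """
        F = np.fft.fft2(frames, axes=(-2, -1))                 # Fourier transform of each frame
        amplitude = np.sqrt(np.mean(np.abs(F) ** 2, axis=0))   # amplitude from the averaged power
                                                               # spectrum (uncorrected in this sketch)
        phase = np.angle(np.mean(F, axis=0))                   # simplified phase estimate; real
                                                               # systems use bispectrum averaging
        image = np.fft.ifft2(amplitude * np.exp(1j * phase))   # back to the image domain
        return np.real(image)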


(a) An image of a point source with a short-exposure time shows the speckle pattern caused by the atmosphere interfering with the light waves. (b) An image of a point source with a long-exposure time has a low resolution and is uniformly blurry.

Processing the Image
“Astronomical imaging usually involves observing a bright, compact object, such as a star, in space,” notes Carrano. “The atmospheric profile between the star and the telescope can be thought of as a shallow layer concentrated at or near the telescope pupil. But surveillance imaging is far more complex because the atmospheric turbulence is distributed over the entire path of observation, and the objects are extended over a larger field of view.”
Carrano modified the original speckle-image-reconstruction software so it could process terrestrial imaging through distributed turbulence. The raw data are a sequence of two-dimensional images, for example, 100 1024- by 1024-pixel frames. The first step, called flat fielding, divides each frame by a normalized reference frame of a flat field to correct for bad pixels and for pixels obscured by dust on the optics.
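A minimal sketch of the flat-fielding step, assuming NumPy arrays and a previously recorded flat-field reference frame (the names are illustrative):

    import numpy as np

    def flat_field(frames, flat):
        """Divide each raw frame by a normalized flat-field reference frame."""
        flat_norm = flat / flat.mean()      # normalize so overall brightness is preserved
        flat_norm[flat_norm == 0] = 1.0     # guard against division by zero at dead pixels
        return frames / flat_norm           # broadcasts across the (N, H, W) frame stack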
Next, a global frame-by-frame registration is performed. Carrano explains, “During the data collection, the telescope might be shaking slightly. That factor, along with certain atmospheric effects, leads to the frames shifting horizontally and vertically.” Usually, the first frame is taken as the alignment reference, although the frame average can also be used as a reference. “We borrowed a trick from solar astronomy—breaking up the image sequence into small regions or tiles that overlap and then stitching them back together after processing,” says Carrano. “This subfield processing, or tiling, can significantly improve the quality of the reconstructed image.”
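Shifts of this kind are commonly estimated with phase correlation, which locates the peak of the normalized cross-power spectrum between a frame and the reference. The sketch below is an assumed illustration of that standard approach, not the team's code; the tiling and stitching steps are omitted for brevity.

    import numpy as np

    def register_to_reference(ref, frame):
        """Align frame to ref by the integer-pixel shift found via phase correlation."""
        cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
        corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))   # normalized cross-power spectrum
        dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
        if dy > ref.shape[0] // 2:   # peaks past the midpoint wrap around to negative shifts
            dy -= ref.shape[0]
        if dx > ref.shape[1] // 2:
            dx -= ref.shape[1]
        return np.roll(frame, (dy, dx), axis=(0, 1))           # apply the correcting shift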
Processing to the diffraction limit for a sequence of 30 256- by 256-pixel images on a commercial laptop computer takes a few seconds, without any attempts to optimize the software. “By modifying the algorithm to continuously process data with a moving sliding window in time and implementing the algorithm into a digital-signal-processing hardware solution, we should be able to achieve the same results but in real time,” says Carrano.
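The moving-window idea can be sketched as follows, reusing the speckle_reconstruct function from the earlier sketch. This illustrates the concept only; it is not the digital-signal-processing hardware implementation the team envisions.

    from collections import deque
    import numpy as np

    def streaming_reconstruct(frame_stream, window_size=30):
        """Yield a reconstructed image for each new frame once the window fills."""
        window = deque(maxlen=window_size)      # sliding window over the most recent frames
        for frame in frame_stream:
            window.append(frame)                # the newest frame evicts the oldest
            if len(window) == window_size:
                yield speckle_reconstruct(np.stack(window))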

The critical components of an enhanced video-surveillance system (telescope optics, camera, and computer) are set up to image a terrestrial object. People can be imaged from 0.5 to a few kilometers away; vehicles can be imaged from a few to more than 10 kilometers away.

Bringing It into Focus
The team has conducted many experiments over the past few years to establish the performance of these enhanced surveillance techniques. The work was funded by the Engineering Directorate as a technology-base project; such projects are discipline-oriented, core-competency activities that are multiprogrammatic in application, nature, and scope. The National Nuclear Security Administration’s Office of Nonproliferation Research and Engineering also funded the experiments and continues to fund the project. From creating a prototype remote-video-surveillance system and conducting initial experiments, Carrano has progressed to improving the image processing and conducting more experiments from a hillside location. She also demonstrated long-range face recognition: a facial image captured from nearly 1 kilometer away was detailed enough for commercial facial-recognition software to find the correct match in a database of 685 people.
“We also have been exploring ways to improve the system’s real-time capability,” says Carrano. “Earlier efforts required the object under surveillance to remain still during the 2 to 3 minutes required for data acquisition, after which it took from 10 to 30 minutes for the images to be processed with the prototype version of the software.”
The real-time imaging system uses a video-rate charge-coupled-device (CCD) camera and a Camera Link frame grabber, whose user interface and data-acquisition capabilities were developed by Livermore engineer Dennis Silva. Carrano converted the image-processing software, originally prototyped in Fortran and IDL, into the C programming language. “These efforts increased the speed tremendously,” notes Carrano. To further speed up the processing, she used a multiprocessor computer and parallelized the software over the server’s four processors. For example, processing a sequence of 30 1024- by 1024-pixel images on four 1.9-gigahertz Xeon processors now takes 20 seconds instead of the 60 seconds required by a single processor on the same machine.
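Because the overlapping tiles are processed independently, they map naturally onto multiple processors. A hypothetical Python illustration of the idea (the production software is in C, and the helper names here are assumptions):

    from multiprocessing import Pool

    def process_tile(tile_stack):
        """Stand-in for per-tile speckle reconstruction (see the earlier sketch)."""
        return speckle_reconstruct(tile_stack)

    def reconstruct_tiles(tile_stacks, workers=4):
        """Reconstruct the tiles in parallel across worker processes."""
        with Pool(processes=workers) as pool:
            return pool.map(process_tile, tile_stacks)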


The enhanced video-surveillance system reconstructed an image of a truck, moving at freeway speed, from 13 kilometers away. The system used a motion-compensation approach that reduces the artifacts by removing the background.

The real-time camera system has been tested at ranges from 1 to more than 20 kilometers and is being used to image objects in motion, such as a vehicle moving through a stationary background. With appropriate preprocessing to track and isolate the moving object of interest within the image sequence, imagery of moving objects can be successfully enhanced. “Work remains to be done in this area,” Carrano notes. “For instance, if the target is moving too fast, active-object tracking will be needed to capture enough frames for processing; we typically need at least 20 to 30 frames.”
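One simple way to isolate a moving object before enhancement is to subtract an estimate of the static background. The sketch below assumes a median-background model over floating-point frames, which may differ from the team's actual motion-compensation approach:

    import numpy as np

    def isolate_moving_object(frames, threshold=20.0):
        """Mask pixels that differ significantly from a median background estimate."""
        background = np.median(frames, axis=0)             # static-scene estimate over the stack
        masks = np.abs(frames - background) > threshold    # per-frame motion masks
        return frames * masks                              # zero out the stationary background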
Carrano has also enhanced the system for use in low-light situations, such as during twilight and at night. By adding an image intensifier to the CCD camera, she was able to use only ambient light to obtain good-quality images at twilight from 1.5 kilometers away. For nighttime viewing, she used the image intensifier together with a near-infrared illuminator on the targets, producing images in the near-infrared portion of the spectrum from a kilometer away. Finally, Carrano and her colleagues developed a simulation capability to predict performance for a variety of imaging scenarios.
The high-resolution imaging capability has drawn the interest of the Department of Defense and the intelligence community. As Carrano points out, it could be useful to any organization or company that needs a detailed picture of what is occurring at a distance, such as law enforcement personnel and wildlife researchers.
“We’ve made much progress in developing and demonstrating the speckle-imaging technique for long-range surveillance imaging since it was only an idea just over four years ago,” says Carrano. “Eventually, in collaboration with industrial partners, we plan to create a compact, real-time system for use in the field.”
—Ann Parker

Acknowledgments: Programmatic support: James Brase and Scot Olivier; experimental support: Doug Poland, Mike Newman, Jack Tucker, Kevin Baker, Kai LaFortune, and Scott Wilks; real-time-imaging software support: Dennis Silva; early consulting support: Don Gavel and Erik Johansson; radiometry and optics support: Brian Bauman; parallel simulation implementation: Jose Milovich.

Key Words: atmospheric turbulence, enhanced surveillance, speckle imaging, image processing.

For further information contact Carmen Carrano (925) 422-9918 (carrano2@llnl.gov).

 





Lawrence Livermore National Laboratory
Operated by the University of California for the U.S. Department of Energy

UCRL-52000-05-6 | June 6, 2005