Center for Advanced Signal and Image Sciences (CASIS)
Signal and Image Sciences at LLNL
Signal and image sciences enable efficient and accurate processing, generation, analysis, and interpretation of signals and images in fields such as telecommunications, medical imaging, computer vision, and more. At the Lab, they touch nearly every program and are the backbone of NIF diagnostics, nondestructive evaluation and characterization, advanced sensing, AI/machine learning, and various other critical mission roles.
Technical Capabilities

Adaptive Optics
Adaptive optics involves using deformable mirrors and image processing techniques to correct wavefronts in optical systems. Correcting wavefront distortions can substantially improve resolution and enable advanced imaging systems like extremely large telescopes.

Adaptive Optics
Adaptive Optics (AO) uses deformable mirrors and image processing techniques to correct wavefronts in real time in optical systems. Correcting phase errors can substantially improve the imaging resolution of astronomical telescopes by compensating for the atmospheric distortion of light.
LLNL has been a leader in the AO field for years, helping develop the Keck Observatory’s laser guide star, playing a major role in the UC Center for Adaptive Optics, and pioneering advanced methods used in the LLNL-led Gemini Planet Imager project in Chile. In that time, researchers have introduced seminal advances in high-performance AO, including Fourier transform wavefront reconstruction, spatially filtered wavefront sensing, and linear quadratic Gaussian control to predict wind-blown turbulence.
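To give a flavor of the Fourier transform wavefront reconstruction idea mentioned above, the sketch below recovers a phase map from measured x- and y-slope grids by inverting finite-difference operators in the spatial-frequency domain. It is an illustrative toy in Python with NumPy, not LLNL code; the sensor model, grid size, and function names are our own assumptions.

```python
# Illustrative sketch of Fourier-transform wavefront reconstruction (toy model).
import numpy as np

def ft_reconstruct(slope_x, slope_y):
    """Estimate a wavefront phase map from x/y slope measurements on an N x N grid."""
    n = slope_x.shape[0]
    fx = np.fft.fftfreq(n)                       # normalized spatial frequencies
    kx, ky = np.meshgrid(fx, fx, indexing="xy")  # kx varies along columns, ky along rows

    # Transfer functions of the forward finite differences assumed for the sensor.
    dx = np.exp(2j * np.pi * kx) - 1.0
    dy = np.exp(2j * np.pi * ky) - 1.0
    denom = np.abs(dx) ** 2 + np.abs(dy) ** 2
    denom[0, 0] = 1.0                            # avoid division by zero at DC

    sx_hat = np.fft.fft2(slope_x)
    sy_hat = np.fft.fft2(slope_y)

    # Least-squares inversion of the difference operators in the frequency domain.
    phi_hat = (np.conj(dx) * sx_hat + np.conj(dy) * sy_hat) / denom
    phi_hat[0, 0] = 0.0                          # piston (mean phase) is unobservable
    return np.real(np.fft.ifft2(phi_hat))

# Example: a smooth periodic wavefront, differentiated and then reconstructed.
x = np.arange(32)
phase = np.sin(2 * np.pi * x / 32)[None, :] + np.cos(2 * np.pi * x / 32)[:, None]
sx = np.roll(phase, -1, axis=1) - phase          # forward difference along x
sy = np.roll(phase, -1, axis=0) - phase          # forward difference along y
print("max reconstruction error:", np.max(np.abs(ft_reconstruct(sx, sy) - phase)))
```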

Read more:
- Space Science Institute: Adaptive Optics
- Machine learning tool fills in the blanks for satellite light curves
- LLNL engineers deliver final optical components for world’s newest telescope: the Vera C. Rubin Observatory
- Guide star leads to sharper astronomical images
- The widest, deepest images of a dynamic universe

CT Reconstruction and Analysis
Computed tomography (CT) scanning and image analysis is a way to examine the interior of objects without breaking them open. The technique involves using computers to synthesize information from a set of radiographs to generate detailed 3D images to support nondestructive characterization and evaluation.

CT Reconstruction and Analysis
Computed tomography (CT) scanning is a way to examine the interior of objects without breaking them open. In CT reconstruction, x-rays are shot at an object from multiple angles, producing radiographs. A computer synthesizes the information from the set of radiographs to generate a detailed 3D image representing the material density and composition at each point in the object. CT reconstruction is an extremely versatile technology that’s used in hospitals, nuclear waste sites, airports, and beyond.
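As a simple illustration of the reconstruction step, the sketch below simulates radiographs of a test object from many angles and recovers a 2D slice with filtered back-projection. It uses the open-source scikit-image library and a standard synthetic phantom; it is a minimal example, not the Lab’s reconstruction software, and real scanners also contend with noise, scatter, and limited views.

```python
# Illustrative sketch of CT reconstruction via filtered back-projection.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

# Synthetic object: the classic Shepp-Logan phantom stands in for a scanned part.
image = rescale(shepp_logan_phantom(), scale=0.5, mode="reflect")

# Simulate radiographs: projections of the object taken from many angles.
angles = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)
sinogram = radon(image, theta=angles)

# Reconstruct a 2-D slice from the projections (ramp-filtered back-projection).
reconstruction = iradon(sinogram, theta=angles, filter_name="ramp")

# The reconstruction approximates the density map at each point in the object.
error = np.sqrt(np.mean((reconstruction - image) ** 2))
print(f"RMS reconstruction error: {error:.4f}")
```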

LLNL has been a leader in CT research and deployment since 1980. Led by the Nondestructive Characterization Institute (NCI), the Lab has built a wide range of innovative CT scanners and software, and has developed techniques to inspect radioactive waste barrels, detect bomb threats, and even study fossils. LLNL and the NCI are also responsible for developing the technology behind the x-ray scanners used by airport security around the world.
Current CT-based projects include quality control for additively manufactured parts, applying deep learning techniques for image reconstruction from very few radiographs, physics experiments to research new ways of generating x-rays, characterization of contraband, and virtual reality-based inspection techniques.

Computer Vision and Video Analytics
Computer vision and video analytics involves processing images and videos into data that machine learning models can use for object recognition, change detection, and tracking. These techniques can enable large-scale search, retrieval, and indexing for both field and archival purposes in a wide range of applications.

Computer Vision and Video Analytics
Computer vision and video analytics involves processing images, videos, and audio into data that a computer can understand and use. With proper training, machine learning models can efficiently recognize objects in large images, detect and track motion, and label this information for use in the field or for archival purposes.
At LLNL, researchers are striving to make models more robust, more sensitive, and capable of making accurate decisions with limited information and in cluttered environments. To do this, the Lab is making strides in unsupervised feature learning for large data sets to use all available information—including audio, imagery, motion, semantic information, and tags—while exploring cutting-edge machine learning and video analysis methods to enable large-scale search, retrieval, and indexing.
Video processing is a leading challenge, and LLNL is both applying and advancing state-of-the-art techniques in the area to improve object tracking and action recognition and create spatiotemporally-aware video processing models. Computer vision techniques are also being used at NIF to identify, characterize, and track damage on optical components.
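As a small, concrete example of the motion detection and tracking building blocks involved, the sketch below uses OpenCV background subtraction to flag moving objects in a video and draw bounding boxes around them. The video file name, thresholds, and minimum blob size are placeholders; this is a generic illustration, not an LLNL analytic.

```python
# Illustrative sketch of motion detection with OpenCV background subtraction.
import cv2

capture = cv2.VideoCapture("surveillance_clip.mp4")   # hypothetical input video
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

while True:
    ok, frame = capture.read()
    if not ok:
        break

    # Foreground mask: pixels that differ from the learned background model.
    mask = subtractor.apply(frame)
    mask = cv2.medianBlur(mask, 5)                     # suppress salt-and-pepper noise
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # drop shadow pixels

    # Group foreground pixels into candidate moving objects and box them.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        if cv2.contourArea(contour) < 500:             # ignore tiny blobs
            continue
        x, y, w, h = cv2.boundingRect(contour)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow("detections", frame)
    if cv2.waitKey(30) & 0xFF == ord("q"):
        break

capture.release()
cv2.destroyAllWindows()
```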

Machine Learning and Artificial Intelligence
Machine learning and artificial intelligence techniques are used to extract information from big or complex datasets. The techniques underpin much of modern signal and image processing and are becoming increasingly robust and capable of tackling complicated classification, clustering, and change detection problems.

Machine Learning and Artificial Intelligence
Machine learning and artificial intelligence techniques are used to sort through and extract information from large or highly complex data sets. These techniques underpin much of modern signal and image processing in novel sensing research, computer vision, adaptive optics, and more, and they touch nearly every part of the Lab.
At LLNL, researchers use a variety of state-of-the-art machine learning algorithms to tackle complex classification, clustering, and change and anomaly detection problems for diverse types of data. The Lab’s experience and infrastructure for processing both batch and streaming big data also allow researchers to extend the capabilities of algorithms like neural networks, random forests, and dynamic belief networks to produce actionable information from complex and multimodal data. Current projects include automating and optimizing manufacturing and improving the robustness of AI systems.
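The sketch below shows, in miniature, the kind of supervised classification and unsupervised anomaly detection mentioned above, using scikit-learn’s random forest and isolation forest on synthetic data. The data set, features, and parameters are invented for illustration and do not reflect any particular Lab project.

```python
# Illustrative sketch of classification and anomaly detection with scikit-learn.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, IsolationForest
from sklearn.model_selection import train_test_split

# Synthetic multichannel "sensor" data with two classes.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

# Random forest for supervised classification.
classifier = RandomForestClassifier(n_estimators=200, random_state=0)
classifier.fit(X_train, y_train)
print(f"classification accuracy: {classifier.score(X_test, y_test):.3f}")

# Isolation forest for unsupervised anomaly detection on the same feature space.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(X_train)
flags = detector.predict(X_test)        # +1 = nominal, -1 = anomalous
print(f"flagged anomalies: {np.sum(flags == -1)} of {len(flags)}")
```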

Novel Sensing
The Lab has a long history of leveraging its expertise in signal and image science to advance and expand sensing capabilities in new fields, particularly in geophysics and the biosciences. State-of-the-art sensing techniques have influenced everything from predictive models of Earth’s atmospheric conditions to studies of how cancer grows and spreads in the body.

Novel Sensing
The Lab has a long history of leveraging its expertise in signal and image science to advance sensing capabilities and apply them to new fields, particularly bioscience and earth sciences.
Earth Sciences
Novel signal and image science capabilities help the Lab study a wide range of phenomena in atmospheric, earth, and energy science, including weather, seismic activity, gravitational variations, and subsurface geophysics. Techniques developed for earth science research include seismic array processing, models of atmospheric conditions, and gravity gradiometry, all of which are used extensively to model geophysical data and extract information from corrupted or noisy data sets. Lab researchers have helped study, model, and predict changing weather patterns, detect seismic "fingerprints," and model smoke from wildfires, among other projects.
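As a toy example of seismic array processing, the sketch below applies delay-and-sum beamforming to synthetic recordings from a small line array: shifting each trace by a trial delay and stacking makes a coherent arrival add up, revealing its apparent velocity. The sensor spacing, wavelet, and noise level are all made up for illustration.

```python
# Illustrative sketch of delay-and-sum beamforming for a small seismic array.
import numpy as np

fs = 100.0                                   # sample rate, Hz
t = np.arange(0, 10, 1 / fs)                 # 10 s of data

# Hypothetical line array: 5 sensors spaced 1 km apart.
sensor_x = np.arange(5) * 1000.0             # sensor positions, m
slowness_true = 1.0 / 3000.0                 # s/m (apparent velocity 3 km/s)

# Synthetic arrival: a short wavelet that reaches each sensor with a delay.
def wavelet(time):
    return np.exp(-((time - 5.0) ** 2) / 0.05) * np.sin(2 * np.pi * 5.0 * time)

rng = np.random.default_rng(0)
traces = np.array([wavelet(t - slowness_true * x) for x in sensor_x])
traces += 0.2 * rng.standard_normal(traces.shape)    # additive noise

def beam_power(slowness):
    """Shift each trace by the trial delay, stack, and return the beam power."""
    stacked = np.zeros_like(t)
    for x, trace in zip(sensor_x, traces):
        shift = int(round(slowness * x * fs))         # trial delay in samples
        stacked += np.roll(trace, -shift)             # undo the propagation delay
    stacked /= len(sensor_x)
    return np.mean(stacked ** 2)

# Grid search over trial slowness values; the peak estimates the true slowness.
trial = np.linspace(0.0, 1.0 / 1000.0, 201)
best = trial[np.argmax([beam_power(s) for s in trial])]
print(f"estimated apparent velocity: {1.0 / best:.0f} m/s")
```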

Biosciences
Lab researchers have also advanced sensing capabilities to support LLNL’s trailblazing bioscience and biotechnology research. In addition to helping advance CT scans for medical examinations, signal and image sciences are key to analyzing the plethora of signals biological organisms produce and to constructing complex simulation models. Some advances include catalytic template matching, predictive modeling for adverse drug reactions, and signal processing capabilities for implantable technology. Current projects range from using informatics to study cancer and cellular fluidic systems to detecting pathogens and studying microbes in the soil.
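One building block behind several of these capabilities is template matching, sketched below for a one-dimensional biological signal: a known waveform is correlated against a noisy recording to locate candidate events. The waveform shape, sample rate, and thresholds are synthetic placeholders, not taken from any LLNL system.

```python
# Illustrative sketch of template matching (matched filtering) on a 1-D signal.
import numpy as np
from scipy.signal import correlate, find_peaks

rng = np.random.default_rng(1)
fs = 250.0                                    # sample rate, Hz
t = np.arange(0, 4, 1 / fs)

# Template: a short oscillatory event shape (purely synthetic).
tau = np.arange(0, 0.04, 1 / fs)
template = np.sin(2 * np.pi * 50 * tau) * np.hanning(len(tau))

# Synthetic recording: two copies of the event buried in noise.
recording = 0.3 * rng.standard_normal(len(t))
for onset in (1.0, 2.5):                      # event start times, s
    start = int(onset * fs)
    recording[start:start + len(template)] += 2.0 * template

# Matched filter: correlate the recording with the zero-mean, unit-energy template.
kernel = (template - template.mean()) / np.linalg.norm(template)
score = correlate(recording, kernel, mode="valid")

# Report well-separated peaks above a simple threshold as detected events.
peaks, _ = find_peaks(score, height=4.0 * np.std(score), distance=int(0.1 * fs))
print("candidate event times (s):", np.round(peaks / fs, 2))
```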

Learn more:
- Mitigating the risk of infection in combat-related injuries
- Greenland melted recently, shows high risk of sea level rise today
- LLNL-developed thin-film electrodes reveal key insight into human brain activity
- Fighting Bacterial Infections with Machine Learning
- Seismic Sleuths Set Off the Source Physics Experiment
- Satellites may have underestimated warming in the lower atmosphere

Quantum Sensing and Information Processing
Quantum sensing and information processing involves using signals to measure, control, and manipulate quantum systems. Quantum systems are poised to play a critical role in developing next-generation devices and techniques that can revolutionize capabilities ranging from navigation to materials science.

Quantum Sensing and Information Processing
Quantum sensing and information processing involves using signals that control, manipulate, and measure quantum systems. The improved sensor resolution and increased computational power that quantum devices promise may revolutionize the Lab’s capabilities for applications ranging from positioning and navigation to quantum chemistry, nuclear physics, fusion, and materials science.
LLNL’s state-of-the-art Quantum Design and Integration Testbed (QuDIT) facility supports superconducting device testing and quantum information science research, including developing algorithms and models that operate on this new computing hardware and conducting basic research with superconducting quantum devices. One current area of research is quantum optimal control, which uses classical optimization methods to compute microwave control signals for a quantum system.
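The sketch below illustrates the quantum optimal control idea in its simplest form: a classical optimizer shapes a piecewise-constant control pulse so that a toy one-qubit model implements a target gate. The Hamiltonian, pulse parameterization, and optimizer choice are simplifications for illustration, not the methods or control signals used on QuDIT hardware.

```python
# Illustrative sketch of quantum optimal control on a toy one-qubit model.
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)
target = sigma_x.copy()        # target gate: X (a bit flip)

detuning = 0.3                 # fixed drift term in the toy Hamiltonian (arbitrary units)
n_segments = 8                 # number of piecewise-constant pulse segments
dt = 0.25                      # duration of each segment

def propagate(amplitudes):
    """Total unitary produced by the piecewise-constant control pulse."""
    unitary = np.eye(2, dtype=complex)
    for amp in amplitudes:
        hamiltonian = 0.5 * detuning * sigma_z + 0.5 * amp * sigma_x
        unitary = expm(-1j * hamiltonian * dt) @ unitary
    return unitary

def infidelity(amplitudes):
    """1 - gate fidelity, ignoring global phase."""
    overlap = np.trace(target.conj().T @ propagate(amplitudes)) / 2.0
    return 1.0 - np.abs(overlap) ** 2

# Classical optimization of the control amplitudes; production-grade optimal
# control typically uses gradient-based methods on far richer device models.
result = minimize(infidelity, x0=np.ones(n_segments), method="Nelder-Mead",
                  options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-10})
print(f"final gate infidelity: {infidelity(result.x):.2e}")
```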
Quantum sensing uses quantum systems to improve the sensitivity, speed and/or fidelity of measurements. Current projects at LLNL include improving dark matter detectors, searching for hidden objects through their gravitational signals, and using low-intensity entangled photon pairs to image biological samples.

Radiation Detection
The unique energy signals that radioactive isotopes emit can be used to detect, identify, and characterize radiation anomalies in complex environments. Radiation detection and isotope identification are key to finding and analyzing nuclear threats, as well as training first responders and responding to potential incidents.

Radiation Detection
The unique energy signals that radioactive isotopes emit can be used to detect, identify, and characterize the source material and specific isotope, and to infer the material’s composition, age, origin, and processing history. Radiation detection and isotope identification are therefore key to analyzing nuclear threats, distinguishing threats from benign sources of radiation, and responding to potential incidents.
Expertise in radiation detection has spanned decades of research at LLNL, informed by our unparalleled insight into detector physics, radiation transport codes, and nuclear materials properties. Lab researchers use statistical modeling, complex simulations, and advanced signal processing to make detection methods faster, more sensitive, and capable of scanning larger areas.
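As a small illustration of one step in isotope identification, the sketch below finds photopeaks in a synthetic gamma-ray spectrum and matches them against a tiny library of well-known lines (Cs-137 and Co-60). The spectrum shape, detector resolution, and thresholds are invented for the example and are far simpler than real detector data.

```python
# Illustrative sketch of photopeak finding and line matching in a gamma spectrum.
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(2)
energy = np.arange(0.0, 1500.0, 1.0)                   # keV, 1 keV bins

# Synthetic spectrum: smooth continuum plus Gaussian photopeaks.
def peak(center, counts, width=3.0):
    return counts * np.exp(-0.5 * ((energy - center) / width) ** 2)

spectrum = 200.0 * np.exp(-energy / 400.0)             # falling continuum
spectrum += peak(661.7, 400.0) + peak(1173.2, 250.0) + peak(1332.5, 220.0)
spectrum = rng.poisson(spectrum).astype(float)         # counting statistics

# Find prominent, resolved peaks above the local background.
indices, _ = find_peaks(spectrum, prominence=100.0, width=2)
found = energy[indices]

# Tiny line library (keV): match found peaks to candidate isotopes within a tolerance.
library = {"Cs-137": [661.7], "Co-60": [1173.2, 1332.5]}
for isotope, lines in library.items():
    hits = [line for line in lines if np.any(np.abs(found - line) < 3.0)]
    if hits:
        print(f"{isotope}: matched lines at {hits} keV")
```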
Some current projects include using quantum sensors for radiation detection and applying our expertise to simulate radiation, which can be used to test new detection devices and train first responders like the LLNL Nuclear Emergency Support Team (NEST).
Learn more:
- Data Reinforcements
- Beneath the Surface
- A Serious Game for Incident Response Training
- LLNL gamma-ray sensor has the best resolution
- Lab technologies continue to protect the nation against explosives, radiological and nuclear terrorism
- Lab analysis reveals forensic signatures of nuclear material during international smuggling exercise

Signal and Image Processing at NIF
Signal and image sciences are critical to supporting the National Ignition Facility (NIF), the world’s most powerful laser, through nuclear fusion signature imaging and analysis, diagnostic equipment, automated laser beam alignment, and inspection of optics to identify components to repair or replace.
Signal and Image Processing at NIF
Signal and image sciences are critical to the National Ignition Facility (NIF), from aligning the facility’s 192 laser beams to processing data from each shot, imaging reactions, and identifying components to repair.
Analysis of Fusion Signatures
Fusion ignition generates a huge amount of raw data in the form of heat, motion, radioactive decay, neutron activity, and x-rays that together paint a comprehensive picture of each shot’s performance. NIF’s multitude of diagnostic instruments process this data to measure everything from energy yield to conditions inside the capsule, to the shape, size, and duration of the implosion and resulting “burn” of fusion reactions.

Automated Alignment
Steering 192 laser beams through a complex optical path to within 50 microns on a small fuel capsule requires incredible precision in alignment. NIF’s Automated Alignment system does this automatically in a fraction of a second by using a parallel control system composed of hundreds of control loops and devices. The backbone of the system is its image analysis algorithm, which extracts information from alignment imagery and provides feedback to the control loops. Image processing also helps align the high-powered x-ray imaging systems that take pictures of the capsule as it compresses and implodes.
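The sketch below gives a toy version of that image-to-control-loop feedback: it locates the centroid of a beam spot in an alignment image, compares it to the desired position, and converts the pixel offset into a corrective move. The gains, pixel scale, and synthetic image are placeholders, not parameters of NIF’s Automated Alignment system.

```python
# Illustrative sketch of centroid-based alignment feedback (toy proportional loop).
import numpy as np

def beam_centroid(image):
    """Intensity-weighted centroid (row, col) of a beam-spot image."""
    image = np.clip(image - np.median(image), 0.0, None)   # crude background removal
    total = image.sum()
    rows, cols = np.indices(image.shape)
    return np.array([(rows * image).sum() / total, (cols * image).sum() / total])

def alignment_step(image, target_px, gain=0.5, microns_per_pixel=25.0):
    """One control-loop iteration: measured offset -> corrective move in microns."""
    offset_px = beam_centroid(image) - np.asarray(target_px, dtype=float)
    return -gain * microns_per_pixel * offset_px            # command for the actuator

# Example with a synthetic beam spot displaced from the desired position.
yy, xx = np.indices((128, 128))
spot = np.exp(-(((yy - 70) ** 2) + ((xx - 58) ** 2)) / (2 * 4.0 ** 2))
command = alignment_step(spot, target_px=(64, 64))
print("corrective move (microns, row/col):", np.round(command, 1))
```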
Laser Optics Inspection
Image processing, pattern analysis, and machine learning play an important role in inspecting optical components for damage. These inspections prolong component lifetimes, sustain the facility, and help predict and plan for when optics need to be exchanged or recycled.
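A minimal sketch of the image processing side of such an inspection is shown below: candidate damage sites in a synthetic optic image are flagged by thresholding and labeling bright connected regions. The image, threshold, and site sizes are invented for illustration; the production inspection pipeline is far more sophisticated.

```python
# Illustrative sketch of flagging candidate damage sites by threshold-and-label.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)
image = 0.05 * rng.standard_normal((256, 256))            # synthetic dark-field image
yy, xx = np.indices(image.shape)

# Inject a few bright "damage sites" of varying size.
for row, col, size in [(40, 200, 2), (150, 60, 4), (220, 180, 3)]:
    image += np.exp(-(((yy - row) ** 2) + ((xx - col) ** 2)) / (2 * size ** 2))

# Threshold and label connected bright regions as candidate sites.
mask = image > 0.5
labels, n_sites = ndimage.label(mask)
centers = ndimage.center_of_mass(image, labels, index=range(1, n_sites + 1))
areas = ndimage.sum(mask, labels, index=range(1, n_sites + 1))

for center, area in zip(centers, areas):
    print(f"candidate site at ({center[0]:.0f}, {center[1]:.0f}), area {area:.0f} px")
```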