— Layered Combat Intelligence Center
LCIC trains and deploys machine-learning models on LiDAR, radar, SAR, and EO/IR returns — turning raw sensor data into actionable intelligence at the tactical edge.
— How it works
Every detection traverses four states. Our models compress that path so decisions happen in milliseconds, not minutes.
01 · STATE
Time-of-flight points, radar bursts, SAR phase histories — sensor outputs in their native form, before any interpretation.
02 · STATE
Spatial indexing converts unstructured returns into a regular 3D lattice — the substrate every downstream model operates on.
03 · STATE
Domain-tuned 3D object detectors classify each occupied region: vehicle, vessel, UAS, structure. Bounded, scored, ranked.
04 · STATE
Associations across frames yield persistent tracks — kinematics, intent estimation, and the residual that closes the kill chain.
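The four states above can be sketched end to end in a few dozen lines. Everything here is an illustrative assumption, not LCIC's pipeline: the voxel size, the occupancy-count score, and the nearest-neighbour gate are toy stand-ins for the spatial indexing, learned detectors, and association logic described above.

```python
# Toy sketch of the four-state path: raw returns -> voxel lattice ->
# scored detections -> persistent tracks. All thresholds are illustrative.
from collections import defaultdict

VOXEL = 2.0  # voxel edge length in metres (assumed)

def voxelize(points):
    """State 1 -> 2: bin raw (x, y, z) returns into a regular 3D lattice."""
    grid = defaultdict(list)
    for x, y, z in points:
        key = (int(x // VOXEL), int(y // VOXEL), int(z // VOXEL))
        grid[key].append((x, y, z))
    return grid

def detect(grid, min_points=3):
    """State 2 -> 3: score occupied voxels. A real system runs a learned
    3D detector here; this stand-in scores by point count alone."""
    dets = []
    for key, pts in grid.items():
        if len(pts) >= min_points:
            cx = sum(p[0] for p in pts) / len(pts)
            cy = sum(p[1] for p in pts) / len(pts)
            score = min(1.0, len(pts) / 10.0)  # toy confidence
            dets.append({"center": (cx, cy), "score": score})
    return sorted(dets, key=lambda d: -d["score"])

def associate(tracks, dets, gate=3.0):
    """State 3 -> 4: nearest-neighbour association across frames."""
    for d in dets:
        best = None
        for t in tracks:
            dx = t["center"][0] - d["center"][0]
            dy = t["center"][1] - d["center"][1]
            dist = (dx * dx + dy * dy) ** 0.5
            if dist < gate and (best is None or dist < best[0]):
                best = (dist, t)
        if best:
            best[1]["center"] = d["center"]
            best[1]["hits"] += 1
        else:
            tracks.append({"center": d["center"], "hits": 1})
    return tracks
```

Run per frame: `tracks = associate(tracks, detect(voxelize(frame)))`. A detection that reappears within the gate extends its track; one that does not spawns a new track.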
Capabilities
Each grounded in published methods, tuned for the conditions our domain actually presents.
LiDAR, FMCW radar, SAR, and EO/IR ingested through a unified spatiotemporal pipeline. We treat each modality as evidence, not as ground truth — fusion is calibrated to the environment, not assumed.
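Treating each modality as evidence rather than ground truth can be made concrete with a hedged sketch: per-modality confidences combined in log-odds space, each weighted by a reliability term calibrated to the current environment. The function names and weight values are assumptions for illustration, not LCIC's fusion rule.

```python
# Sketch of calibrated evidence fusion: each modality contributes its
# confidence in log-odds space, scaled by an environment-dependent
# reliability weight. Names and weights are illustrative assumptions.
import math

def logit(p):
    p = min(max(p, 1e-6), 1 - 1e-6)  # clamp away from 0 and 1
    return math.log(p / (1 - p))

def fuse(confidences, reliability, prior=0.5):
    """Combine per-modality confidences into one posterior probability.

    confidences: e.g. {"eo_ir": 0.9, "radar": 0.6}
    reliability: weight in [0, 1] per modality for this environment.
    """
    log_odds = logit(prior)
    for modality, p in confidences.items():
        w = reliability.get(modality, 0.0)  # unknown modality adds nothing
        log_odds += w * (logit(p) - logit(prior))
    return 1 / (1 + math.exp(-log_odds))
```

The same detections fuse differently under different conditions: in fog, the EO/IR weight drops and the fused posterior leans on radar, which is the calibrated-to-the-environment behaviour the paragraph describes.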
tag: fusion · pipeline · calibration
Domain-tuned 3D object detectors built on PointPillars-, CenterPoint-, and transformer-based families. We fine-tune against the operational context our deployments will see — maritime, littoral, austere.
tag: ATR · 3D detection · fine-tuning
Models compiled for on-platform inference under DDIL conditions: limited bandwidth, intermittent compute, contested EM. Quantized, latency-budgeted, fail-graceful by design.
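Latency-budgeted, fail-graceful inference can be sketched as a deadline around the full model with a cheap fallback path. The 50 ms budget, class name, and model roles are assumptions; the point is only the shape of the control flow, where a missed deadline degrades the answer instead of blocking the decision loop.

```python
# Illustrative latency budget with graceful fallback: if the full model
# misses its deadline, return the cheap estimate instead of blocking.
# Budget value and names are assumptions, not a deployed configuration.
from concurrent.futures import ThreadPoolExecutor, TimeoutError as Deadline

BUDGET_S = 0.050  # 50 ms inference budget (illustrative)

class BudgetedDetector:
    def __init__(self, full_model, cheap_model):
        self.full = full_model    # e.g. the quantized on-platform model
        self.cheap = cheap_model  # e.g. coarse extrapolation from last track
        self.pool = ThreadPoolExecutor(max_workers=1)

    def infer(self, frame):
        fut = self.pool.submit(self.full, frame)
        try:
            return fut.result(timeout=BUDGET_S), "full"
        except Deadline:
            # Fail-graceful path: degraded answer, on time.
            return self.cheap(frame), "degraded"
```

The caller always gets an answer inside the budget, tagged with the mode it was produced in, so downstream consumers can weight a degraded result accordingly.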
tag: edge · DDIL · latency-budgeted
Modalities
Our pipelines speak each sensor in its native dialect, then translate. No modality assumed, none privileged.
Spatial substrate
Every track our models emit is referenced against a continuously updated geospatial layer — building footprints, road networks, terrain meshes — built from open and licensed sources by our sister project, Spatial Data Systems Research.
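Referencing a track against a footprint layer reduces, at its simplest, to a point-in-polygon test. The ray-casting routine below is a minimal stdlib sketch; the square footprint in the usage is invented, and a deployed layer would of course come from the curated sources described above.

```python
# Minimal sketch: does a track position fall inside a footprint polygon?
# Ray-casting test; footprint coordinates in the test are invented.
def in_footprint(point, polygon):
    """Return True if (x, y) lies inside polygon (a list of vertices)."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # count crossings of a horizontal ray cast from (x, y) to the right
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```

Against real footprints this check turns a bare kinematic track into a statement with geographic meaning: inside a structure, on a road, or in open terrain.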
Learn about SDSR
— Mission
Close the gap between sensor and decision.
Sensors emit volume. Operators need signal. We build the layer in between.
Built for
These are the operating conditions that shape our model architectures, training data, and deployment targets.
Sensor-to-decision under DDIL — denied, degraded, intermittent, limited. Inference happens on-platform; the network is not assumed.
Multi-modal detection at standoff. Small targets, low signature, against a noisy clutter background — the detection problem at its hardest.
Persistent observation across radar, SAR, and EO/IR. Tracks survive horizon transitions, weather, and adversarial RF environments.
Contact
Working on a sensor-intelligence problem that doesn't fit a vendor catalog? So are we.