— Layered Combat Intelligence Center

Close the gap between sensor and decision.

LCIC trains and deploys machine-learning models on LiDAR, radar, SAR, and EO/IR returns — turning raw sensor data into actionable intelligence at the tactical edge.

v0.1 / 2026

— How it works

From returns to recognition.

Every detection traverses four states. Our models compress that path so decisions happen in milliseconds, not minutes.

24 KT · 074° VEHICLE / MID-SIZE CONF 0.94

01 · STATE

Raw returns

Time-of-flight points, radar bursts, SAR phase histories — sensor outputs in their native form, before any interpretation.

02 · STATE

Voxel grid

Spatial indexing converts unstructured returns into a regular 3D lattice — the substrate every downstream model operates on.
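The spatial-indexing step can be sketched in a few lines. A minimal illustration assuming plain occupancy voxels at the 5 cm LiDAR resolution quoted below; real pipelines carry richer per-voxel features (point counts, intensity statistics, confidences), and `voxelize` is an illustrative name, not LCIC's API:

```python
import numpy as np

def voxelize(points, voxel_size=0.05):
    """Map an (N, 3) point cloud onto a regular 3D lattice.

    A minimal sketch: only occupancy is tracked here. The lattice is
    anchored at the cloud's minimum corner, and each point is binned
    by integer division of its offset by the cell size.
    """
    origin = points.min(axis=0)                       # lattice anchor
    indices = np.floor((points - origin) / voxel_size).astype(np.int64)
    occupied = np.unique(indices, axis=0)             # one row per occupied voxel
    return occupied, origin

# Illustrative frame: ~120K points over a 200 m x 200 m x 20 m volume
cloud = np.random.rand(120_000, 3) * np.array([200.0, 200.0, 20.0])
voxels, origin = voxelize(cloud, voxel_size=0.05)
```

Two points 1 cm apart share a voxel at this resolution; points 6 cm apart do not.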

03 · STATE

Detection

Domain-tuned 3D object detectors classify each occupied region: vehicle, vessel, UAS, structure. Bounded, scored, ranked.

04 · STATE

Track

Associations across frames yield persistent tracks — kinematics, intent estimation, and the residual that closes the kill chain.
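The frame-to-frame association behind persistent tracks can be illustrated with a greedy nearest-neighbor matcher. This is a toy sketch, not the deployed tracker: production systems pair a motion model (e.g. a Kalman filter) with global assignment (e.g. the Hungarian algorithm), and the `gate` distance here is an assumed value:

```python
import numpy as np

def associate(tracks, detections, gate=5.0):
    """Greedily match detections to existing track positions.

    Each detection is assigned to its nearest unclaimed track if the
    distance is within `gate` metres; otherwise it is left unmatched
    (a candidate for spawning a new track).
    """
    matches, unmatched = [], []
    free = set(range(len(tracks)))
    for j, det in enumerate(detections):
        if not free:
            unmatched.append(j)
            continue
        dists = {i: float(np.linalg.norm(det - tracks[i])) for i in free}
        i_best = min(dists, key=dists.get)
        if dists[i_best] <= gate:
            matches.append((i_best, j))
            free.discard(i_best)
        else:
            unmatched.append(j)
    return matches, unmatched
```

A detection 0.5 m from a known track associates; one 20 m away falls outside the gate and seeds a new track.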

Capabilities

Three pillars.

Each grounded in published methods, tuned for the conditions our domain actually presents.

  • 01

    Multimodal sensor fusion

    LiDAR, FMCW radar, SAR, and EO/IR ingested through a unified spatiotemporal pipeline. We treat each modality as evidence, not as ground truth — fusion is calibrated to the environment, not assumed.

    tag: fusion · pipeline · calibration

  • 02

    Detection & ATR

    Domain-tuned 3D object detectors built on PointPillars-, CenterPoint-, and transformer-based families. We fine-tune against the operational context our deployments will see — maritime, littoral, austere.

    tag: ATR · 3D detection · fine-tuning

  • 03

    Tactical-edge deployment

    Models compiled for on-platform inference under DDIL conditions: limited bandwidth, intermittent compute, contested EM. Quantized, latency-budgeted, fail-graceful by design.

    tag: edge · DDIL · latency-budgeted
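The "evidence, not ground truth" stance in the fusion pillar is often realized by combining calibrated per-modality probabilities in log-odds space. A sketch of that standard scheme, under the assumption of conditionally independent modalities; `fuse_log_odds` is an illustrative name, not LCIC's API:

```python
import math

def fuse_log_odds(probs, prior=0.5):
    """Fuse independent, calibrated detection probabilities.

    Each modality contributes its log-odds relative to the prior;
    the summed evidence is mapped back to a probability. Agreeing
    sensors push confidence up; a skeptical sensor pulls it down.
    """
    logit = lambda p: math.log(p / (1.0 - p))
    total = logit(prior) + sum(logit(p) - logit(prior) for p in probs)
    return 1.0 / (1.0 + math.exp(-total))

# Illustrative numbers: lidar 0.7, radar 0.6, EO/IR 0.8
fused = fuse_log_odds([0.7, 0.6, 0.8])
```

With those inputs the fused confidence lands near 0.93, above any single modality, which is the point of treating each return as evidence.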

Modalities

Every return tells a different story.

Our pipelines speak each sensor in its native dialect, then translate. No modality assumed, none privileged.

01 / 5

LiDAR

Time-of-flight 3D

Point clouds at 10–20 Hz, voxelized for downstream detection. Dense returns near, sparse at standoff — every point has a confidence.

  • PTS/FRAME ~120K
  • RANGE 200 m
  • RES 5 cm

02 / 5

Radar

mmWave / FMCW

All-weather, day-and-night. Doppler resolves motion before vision can; classification follows the velocity signature.

  • BAND 77 GHz
  • DOPPLER ±60 m/s
  • AZ 120°
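The Doppler window above follows from the standard two-way radar relation f_d = 2 v f_c / c. A small sketch (function names are illustrative):

```python
C = 299_792_458.0  # speed of light, m/s

def doppler_shift(velocity_mps, carrier_hz=77e9):
    """Doppler shift for a target at the given radial velocity.

    The factor of 2 reflects the two-way path of a monostatic radar.
    """
    return 2.0 * velocity_mps * carrier_hz / C

def radial_velocity(doppler_hz, carrier_hz=77e9):
    """Inverse relation: v = f_d * c / (2 * f_c)."""
    return doppler_hz * C / (2.0 * carrier_hz)
```

At 77 GHz, the ±60 m/s window quoted above corresponds to Doppler shifts of roughly ±30.8 kHz.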

03 / 5

SAR

Synthetic aperture

Strip-map and spotlight imagery decoded for object signatures invariant to time of day. Phase history, not pixel.

  • MODE STRIP
  • POL HH/VV
  • GSD 0.3 m

04 / 5

EO / IR

Visible + thermal

Multispectral framing for cueing, recognition, and identification. Calibrated against the LiDAR voxel grid for accurate fusion.

  • BAND VIS+LWIR
  • FPS 60
  • IFOV 0.5 mrad

05 / 5

Fused

Cross-modal output

A single track per object, attributed to its strongest evidence. Decisions don't care which sensor saw it first.

  • TRACKS PERSISTENT
  • LATENCY < 200 ms
  • CONF CALIBRATED

Spatial substrate

Detections need a world to live in.

Every track our models emit is referenced against a continuously updated geospatial layer — building footprints, road networks, terrain meshes — built from open and licensed sources by our sister project, Spatial Data Systems Research.

Learn about SDSR
  • OSM / 2026 — 9B+ geometries indexed: building footprints, roads, terrain features.
  • OSM EDITS / MO — 300M+ edits per month: continuous updates from the OSM contributor base.
  • PIPELINE — 5 sensor modalities: LiDAR · radar · SAR · EO · IR.
  • EPSG — 4326 / 3857 / 3395 projections handled natively.
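Handling EPSG:4326 alongside 3857 means converting lon/lat degrees to Web Mercator metres. The standard spherical formula, sketched below; this is an illustration, not a substitute for a full projection library (no datum handling, valid for |lat| < 85.06°):

```python
import math

R = 6_378_137.0  # WGS84 semi-major axis, metres

def wgs84_to_web_mercator(lon_deg, lat_deg):
    """EPSG:4326 (lon/lat degrees) -> EPSG:3857 (Web Mercator metres)."""
    x = R * math.radians(lon_deg)
    y = R * math.log(math.tan(math.pi / 4.0 + math.radians(lat_deg) / 2.0))
    return x, y
```

The origin maps to (0, 0), and lon 180° lands at the familiar Web Mercator extent of about 20,037,508 m.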

— Mission

Close the gap between sensor and decision.

Sensors emit volume. Operators need signal. We build the layer in between.

Built for

Mission contexts we design against.

These are the operating conditions that shape our model architectures, training data, and deployment targets.

  • 01

    Tactical edge

    Sensor-to-decision under DDIL — denied, degraded, intermittent, limited. Inference happens on-platform; the network is not assumed.

  • 02

    C-UAS & wide-area surveillance

    Multi-modal detection at standoff. Small targets with low signatures against heavy clutter: the hardest case of the detection problem.

  • 03

    Maritime & littoral domain awareness

    Persistent observation across radar, SAR, and EO/IR. Tracks survive horizon transitions, weather, and adversarial RF environments.

Contact

Get in touch.

Working on a sensor-intelligence problem that doesn't fit a vendor catalog? So are we.

Or email support@lcic.xyz directly.