Here are the most frequently asked questions we couldn’t cover live.
How can manufacturers deploy AI in air-gapped or offline OT environments?
There are two main approaches.
1- Run on-prem LLMs: Deploy your model of choice through a local runtime such as Ollama, within your data center on GPU-based infrastructure such as HCI or dedicated AI hardware like NVIDIA DGX Spark. This keeps the system air-gapped while allowing it to scale (see the sketch after this list).
2- Fully localized AI: Run Litmus Edge with your model as a container directly on a GPU-enabled server (e.g., Dell, DGX Spark). This provides a completely self-contained, secure AI deployment at the OT layer.
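For illustration, here is a minimal sketch of querying a locally hosted model over Ollama’s HTTP API from inside the air-gapped network. The server address and model name are placeholders for whatever runs in your data center.

```python
# Minimal sketch: prompt a locally hosted LLM over Ollama's HTTP API.
# The host below is a placeholder for an on-prem GPU server; nothing
# leaves the OT network.
import requests

OLLAMA_URL = "http://10.0.0.5:11434/api/generate"  # placeholder on-prem host

def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to the on-prem model and return the completion."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["response"]

print(ask_local_llm("Summarize the last shift's downtime events."))
```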
Does Litmus integrate with PI Historian and AF?
Yes. Litmus offers a native PI Historian connector that extracts data through the SDK. Context can then be added to align with AF structures. Many organizations prefer not to rebuild AF hierarchies, so Litmus provides modeling tools to replicate or import those frameworks directly into Litmus Edge.
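To illustrate what a replicated AF hierarchy can look like, here is a hypothetical asset model expressed as a plain tree and flattened into contextualized paths. The structure, names, and `attributes` convention are illustrative, not the Litmus import format.

```python
# Illustrative only: an AF-style element hierarchy as a plain asset model.
# Element names and PI point names are hypothetical.
af_style_model = {
    "Enterprise": {
        "Site-A": {
            "Line-1": {
                "Press-01": {
                    "attributes": {
                        "Temperature": {"pi_point": "SITEA.L1.PRESS01.TEMP", "uom": "degC"},
                        "Pressure":    {"pi_point": "SITEA.L1.PRESS01.PRES", "uom": "bar"},
                    }
                }
            }
        }
    }
}

def flatten(node: dict, path: str = ""):
    """Walk the hierarchy and yield (asset path, PI point) pairs."""
    for name, child in node.items():
        if name == "attributes":
            for attr, meta in child.items():
                yield f"{path}/{attr}", meta["pi_point"]
        else:
            yield from flatten(child, f"{path}/{name}")

for asset_path, pi_point in flatten(af_style_model):
    print(asset_path, "<-", pi_point)
```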
Does Litmus support embedding AI at the edge, not just in the cloud?
Absolutely. Think of the architecture as a data flow, not a fixed stack. Many organizations begin with cloud AI for validation, then move to the edge for scale and real-time performance using the same data pipeline.
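To make “same pipeline, different location” concrete, here is a minimal sketch in which deployment configuration decides whether the scoring step calls a cloud endpoint or a local model. The `DEPLOY_TARGET` variable and endpoint URL are assumptions, and the edge model is a stub.

```python
# Minimal sketch: one pipeline, two inference locations. An environment
# variable (an assumption for this sketch) picks the scorer.
import os
import requests

def score_cloud(features: list[float]) -> float:
    # Validation phase: call a cloud-hosted model (placeholder URL).
    resp = requests.post("https://ml.example.com/score", json={"features": features}, timeout=10)
    return resp.json()["score"]

def score_edge(features: list[float]) -> float:
    # Scale phase: run the model locally; a trivial stub stands in here.
    return sum(features) / len(features)

score = score_cloud if os.getenv("DEPLOY_TARGET") == "cloud" else score_edge

def pipeline(raw: list[float]) -> float:
    features = [x / 100.0 for x in raw]  # identical preprocessing in both cases
    return score(features)               # only the scoring location changes

print(pipeline([7250.0, 7310.0]))
```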
How does Litmus integrate with existing point solutions? Should customers replace them?
No replacement required. Litmus complements existing tools by serving as a modern data foundation: standardizing data, replacing legacy protocol converters, and exposing data through modern APIs (REST, GraphQL) and data models and standards (OPC UA, Sparkplug B, UNS). It enhances interoperability and security without disrupting what’s already working.
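As a hypothetical example of consuming that data foundation, the sketch below reads a standardized tag over REST. The host, endpoint path, and response shape are assumptions for illustration, not Litmus’s actual API.

```python
# Hypothetical sketch: reading a contextualized tag through a REST API.
# Host, route, and payload shape are assumptions, not a documented endpoint.
import requests

EDGE_API = "https://litmus-edge.local/api/v1"  # placeholder host

def read_tag(tag_path: str) -> dict:
    resp = requests.get(f"{EDGE_API}/tags/{tag_path}", timeout=5)
    resp.raise_for_status()
    return resp.json()  # e.g. {"tag": ..., "value": ..., "timestamp": ...}

print(read_tag("Site-A/Line-1/Press-01/Temperature"))
```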
Should contextualization happen at the edge or in the cloud?
As close to the data source as possible. Doing so improves data integrity, reduces cloud storage and compute costs, shortens feedback loops, and keeps sensitive IP secure on-prem.
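Here is a minimal sketch of what contextualization at the source can look like: a raw device reading is enriched with asset metadata before it ever leaves the plant. The metadata table and field names are illustrative.

```python
# Minimal sketch: enrich a raw reading with asset context at the edge.
# The metadata table is a stand-in for an edge platform's asset model.
from datetime import datetime, timezone

ASSET_METADATA = {
    "plc-17": {"site": "Site-A", "line": "Line-1", "asset": "Press-01", "uom": "degC"},
}

def contextualize(device_id: str, raw_value: float) -> dict:
    """Attach site/line/asset context and a UTC timestamp to a raw value."""
    meta = ASSET_METADATA[device_id]
    return {
        **meta,
        "value": raw_value,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

print(contextualize("plc-17", 73.4))
```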
Does Litmus help reduce cloud ingress and data transfer costs?
Yes. By filtering, aggregating, and contextualizing data at the edge, Litmus reduces unnecessary payloads to the cloud. Its high-performance broker supports wildcarding and consolidation to minimize message volume and cost.
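The general pattern is easy to sketch: drop samples inside a deadband and forward one consolidated payload per window instead of every raw reading. Thresholds, window size, and payload fields below are illustrative, not Litmus broker internals.

```python
# Minimal sketch: edge-side deadband filtering plus windowed aggregation,
# so the cloud receives one summary payload instead of every raw sample.
class EdgeAggregator:
    def __init__(self, deadband: float = 0.5, window: int = 60):
        self.deadband = deadband  # ignore changes smaller than this
        self.window = window      # raw samples consolidated per payload
        self.buffer: list[float] = []

    def ingest(self, value: float):
        """Return one aggregated payload per full window; None otherwise."""
        # Deadband filter: drop samples that barely changed since the last kept one.
        if self.buffer and abs(value - self.buffer[-1]) < self.deadband:
            return None
        self.buffer.append(value)
        if len(self.buffer) < self.window:
            return None
        payload = {
            "mean": sum(self.buffer) / len(self.buffer),
            "min": min(self.buffer),
            "max": max(self.buffer),
            "count": len(self.buffer),
        }
        self.buffer.clear()
        return payload

agg = EdgeAggregator(deadband=0.2, window=3)
for reading in [70.0, 70.1, 70.5, 71.0, 71.4]:
    if (payload := agg.ingest(reading)) is not None:
        print(payload)  # one consolidated message instead of five raw samples
```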
Is it good practice to have an OT data pipeline separate from ERP or IT systems?
Yes. ERP data is transactional, while OT telemetry is event-driven. Both must be joined in a contextualized model to enable AI. Treating data as a core asset ensures OT data collection isn’t an afterthought—and makes ERP + OT integration far smoother.
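As a small illustration of that join, the sketch below attaches each OT telemetry event to the most recent ERP work order for the same asset using pandas. The column names and data are made up.

```python
# Minimal sketch: join event-driven OT telemetry to transactional ERP
# work orders by asset and time, the shape AI training data typically needs.
import pandas as pd

telemetry = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-05-01 08:01", "2024-05-01 08:07", "2024-05-01 09:02"]),
    "asset": ["Press-01", "Press-01", "Press-01"],
    "temperature": [71.2, 74.8, 69.5],
})

work_orders = pd.DataFrame({  # transactional ERP records
    "timestamp": pd.to_datetime(["2024-05-01 08:00", "2024-05-01 09:00"]),
    "asset": ["Press-01", "Press-01"],
    "work_order": ["WO-1001", "WO-1002"],
})

# Attach each telemetry event to the most recent work order for that asset.
joined = pd.merge_asof(
    telemetry.sort_values("timestamp"),
    work_orders.sort_values("timestamp"),
    on="timestamp", by="asset", direction="backward",
)
print(joined)
```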
How does Litmus complement Ignition?
Ignition excels at building operator-facing apps—visualizations, data entry, and controls. Litmus Edge provides the scalable, standardized data foundation that Ignition consumes, enabling enterprise-wide deployment and management.
Can Litmus support hybrid analytics, like training in the cloud and inference at the edge?
Yes, Litmus enables hybrid AI strategies: train and refine models in the cloud, then deploy them to the edge for inference to reduce latency and cost. Containers and CI/CD pipelines simplify this process.
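A minimal sketch of that pattern, using scikit-learn and joblib as stand-ins: train where compute is cheap, serialize the artifact, and load the same artifact at the edge for local scoring. The model choice, data, and file path are illustrative.

```python
# Minimal sketch: cloud-side training, edge-side inference on the same artifact.
import joblib
import numpy as np
from sklearn.ensemble import IsolationForest

# --- Cloud side: train on historical, contextualized plant data (stand-in data) ---
history = np.random.default_rng(0).normal(72.0, 1.5, size=(1000, 1))
model = IsolationForest(random_state=0).fit(history)
joblib.dump(model, "anomaly_model.joblib")  # shipped to the edge, e.g. in a container image

# --- Edge side: load once, score live readings locally with no cloud round-trip ---
edge_model = joblib.load("anomaly_model.joblib")
print(edge_model.predict([[85.0]]))  # -1 = anomaly, 1 = normal
```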
How does Litmus scale and govern AI models across sites?
Litmus Edge and Edge Manager support container-based model deployment with versioning and CI/CD pipelines. GitHub integration (in development) will further automate model updates, governance, and retraining at enterprise scale.
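Purely as a hypothetical illustration of version-gated rollout, the sketch below has an edge agent compare its running model version against a registry manifest and pull a new container only when the tag advances. The manifest format and registry URL are assumptions, not Litmus Edge Manager behavior.

```python
# Hypothetical sketch: version-gated model update check at the edge.
# Manifest URL and shape are assumptions for illustration only.
import requests

RUNNING_VERSION = "1.4.2"
MANIFEST_URL = "https://registry.example.com/models/press-anomaly/manifest.json"  # placeholder

def parse(version: str) -> tuple:
    """Turn '1.4.2' into (1, 4, 2) for ordered comparison."""
    return tuple(int(part) for part in version.split("."))

manifest = requests.get(MANIFEST_URL, timeout=10).json()  # e.g. {"version": "1.5.0", "image": "..."}
if parse(manifest["version"]) > parse(RUNNING_VERSION):
    print(f"Update available: pull container image {manifest['image']}")
else:
    print("Model is current.")
```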