A friend mentioned Elixir at dinner. At the time, we'd just about had enough of building our charger connectivity layer in Java—wrestling with connection state and devices that each speak their own dialect of the OCPP protocol. "Have you looked at Erlang?" he asked. I went home that night, started reading, and fell down a rabbit hole. Not because the language itself was so dazzling, but because the problems it was designed around felt uncomfortably familiar.

At Voltra, we're building the software layer that connects and controls energy devices, starting with EV chargers over OCPP, and expanding further upstream to substations, factories, industrial sites, and more. OCPP is our proving ground: the place where we're building the foundation for what comes next.
The Problem: Building for Chaos
On paper, the Open Charge Point Protocol (OCPP) is a standard for how EV chargers communicate with management systems over persistent WebSocket connections. In practice, it's a suggestion that every manufacturer implements differently.
Here's an example. OCPP has no explicit message for "cable plugged in", which is a fairly important event: a driver has physically connected and expects something to happen. The specification doesn't define it, so manufacturers infer it from different state combinations of a message called StatusNotification, use vendor-specific extensions, or make something up entirely. One manufacturer we onboarded sends a status of "Preparing" when the cable is plugged in, another sends "SuspendedEV", and a third sends a custom vendor message that isn't in the specification at all. All three are technically compliant.
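To make that concrete, here's a minimal Elixir sketch of what normalizing this one event looks like. The module, vendor names, and payload shapes are illustrative, not our production code; DataTransfer is the spec's escape hatch for vendor extensions, though the payload shown here is made up.
defmodule CablePluggedExample do
  # "Cable plugged in" has no message of its own, so each vendor's
  # signal is pattern-matched and normalized into one internal event.

  # Vendor A: StatusNotification with status "Preparing"
  def normalize(:vendor_a, %{"action" => "StatusNotification", "status" => "Preparing"}),
    do: {:event, :cable_plugged}

  # Vendor B: StatusNotification with status "SuspendedEV"
  def normalize(:vendor_b, %{"action" => "StatusNotification", "status" => "SuspendedEV"}),
    do: {:event, :cable_plugged}

  # Vendor C: a custom DataTransfer payload that isn't in the spec
  # (the "data" value here is invented for the sketch)
  def normalize(:vendor_c, %{"action" => "DataTransfer", "data" => "cable_connected"}),
    do: {:event, :cable_plugged}

  # Anything unrecognized is surfaced, not guessed at
  def normalize(_vendor, msg), do: {:unknown, msg}
end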
Certification unfortunately doesn't solve the problem alone, because it captures a single snapshot in time. Manufacturers push firmware updates that quietly break compliance after certification, and there's no real mechanism to catch it.
This isn't unique to OCPP. Industrial protocols—SCADA, Modbus, SunSpec—are littered with ambiguity, legacy decisions, and implementations that diverge from their respective specifications. The entire energy and automation ecosystem runs on software designed by different committees and implemented by teams who never talk to each other. It isn't sustainable to build for the specifications alone; you have to build for the chaos that surrounds them.
When we were working in Java, each of these quirks meant another layer of defensive code, another try-catch block, another special case where one manufacturer's behavior could cascade into a system-wide problem. It backed us into a corner: we were spending more time building guardrails than building features.
Why the BEAM
Erlang was created by Ericsson in the 1980s to run telephone switches. This meant millions of simultaneous calls, each independent, any of which could fail without affecting the others. Ericsson's engineers built the BEAM virtual machine for this: each call gets its own lightweight process with isolated memory. Not an OS thread—a VM-level process that can be spawned by the millions with minimal overhead. If one fails, its supervisor restarts it. The rest don't notice. Elixir, built on the BEAM by José Valim, gives us this same foundation with modern ergonomics.

The problem Ericsson solved for phone calls is structurally identical to ours: thousands of long-lived connections to unreliable devices, any of which can fail at any time, none of which should take down the system.
Three properties of the BEAM made the difference for us:
Process isolation. No shared state, no mutexes, no possibility of one process corrupting another's memory. When a charger sends a malformed payload, its process crashes and its supervisor restarts it. The other connections are physically unaffected—not because we wrote good error handling, but because the VM makes cross-process corruption impossible. In Elixir, the philosophy is "let it crash." It sounds reckless until you understand that the isolation guarantee comes from the virtual machine, not from discipline. (There's a supervision sketch after this list.)
Preemptive scheduling. Unlike Node.js or Python's asyncio, where one misbehaving connection can starve others, the BEAM guarantees CPU fairness at the VM level. No single charger can monopolize resources, regardless of what it sends.
Per-process garbage collection. No stop-the-world GC pauses. Each process collects independently, so a memory-heavy operation on one connection doesn't freeze the system.
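Here's what the isolation property looks like in practice: one supervisor, one child process per connected charger. This is a sketch; the supervisor module and attach_charger/1 are illustrative names, and it assumes the connection module defines the usual start_link/1.
defmodule Voltra.Charger.Supervisor do
  use DynamicSupervisor

  def start_link(init_arg) do
    DynamicSupervisor.start_link(__MODULE__, init_arg, name: __MODULE__)
  end

  @impl true
  def init(_init_arg) do
    # :one_for_one - a crashing charger process is restarted alone;
    # every other child keeps running untouched.
    DynamicSupervisor.init(strategy: :one_for_one)
  end

  # Called when a charger's WebSocket connects.
  def attach_charger(charger_id) do
    DynamicSupervisor.start_child(__MODULE__, {Voltra.Charger.Connection, charger_id})
  end
end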
How We Built It: One Process Per Charger
A traditional architecture for this problem would use stateless servers behind a load balancer, a database for state, and a message queue for async work. That works for request-response workloads, but it falls apart when you need thousands of persistent connections to devices that might send a message at any moment.
At Voltra, every charger gets its own GenServer process the moment it connects. A GenServer is an Elixir abstraction over a BEAM process—a long-running process that holds state and responds to messages. Each charger's process holds its state in memory: current status, active sessions, configuration, protocol version, and manufacturer-specific quirks. When the charger sends a message, its process handles it immediately—parse, validate, update state, respond. No database round-trip, no queue, no contention.
defmodule Voltra.Charger.Connection do
  use GenServer

  # One of these processes runs per connected charger. The socket layer
  # calls in with each decoded OCPP message; if any step below fails,
  # this process crashes and its supervisor restarts it.
  def handle_call({:ocpp, %StartTransaction{} = msg}, _from, state) do
    with {:ok, session} <- Sessions.authorize_and_start(msg, state.charger),
         :ok <- Billing.begin_metering(session),
         :ok <- Events.publish(:transaction_started, session) do
      {:reply, %StartTransactionResponse{transaction_id: session.id},
       %{state | active_session: session}}
    end
  end
end
This lets us authorize charging sessions in single-digit milliseconds because the state is already in memory. When we need to shed load across a site hitting peak demand, we fan out commands to dozens of chargers simultaneously, each process handling its command in parallel with no coordination overhead.
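A sketch of that fan-out, assuming each charger process is registered under its id in a Registry; shed_load/2, chargers_on_site/1, and Voltra.ChargerRegistry are illustrative names:
defmodule Voltra.Sites do
  # Push a new power limit to every charger process on a site. Casts are
  # asynchronous, so each process applies its limit in parallel; there is
  # no central coordinator and no database polling loop.
  def shed_load(site_id, limit_kw) do
    site_id
    |> chargers_on_site()
    |> Enum.each(fn charger_id ->
      GenServer.cast(via(charger_id), {:set_power_limit, limit_kw})
    end)
  end

  # Address a charger's process by id through the Registry.
  defp via(charger_id), do: {:via, Registry, {Voltra.ChargerRegistry, charger_id}}

  # Stubbed for the sketch; in practice this reads site membership.
  defp chargers_on_site(_site_id), do: []
end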
But the bigger win is isolation. Remember the manufacturer sending a custom vendor message that wasn't in the spec? In Java, handling that edge case meant touching shared code paths that affected every charger. In Elixir, each charger's quirks live inside its own process. We write manufacturer-specific protocol adapters in isolated modules, test them against real hardware, and deploy without risking a single existing connection.
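The shape of those adapters is roughly a behaviour plus one module per manufacturer. This is a sketch: the callback name, the return shape, and the apply_event/3 helper are assumptions, not our exact API.
defmodule Voltra.Charger.Adapter do
  # Contract every manufacturer-specific module implements.
  @callback normalize(raw :: map()) :: {:event, atom(), map()} | {:unknown, map()}
end

# Inside the connection process, the adapter chosen at connect time lives
# in the process state, so manufacturer quirks never touch a shared path.
# (apply_event/3 is a hypothetical helper for the sketch.)
def handle_call({:frame, raw}, _from, state) do
  case state.adapter.normalize(raw) do
    {:event, name, payload} -> apply_event(name, payload, state)
    {:unknown, _raw} -> {:reply, :ignored, state}
  end
end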
What This Unlocked
The architecture gives us three things we couldn't get in Java:
Fast manufacturer onboarding. Each new charger brand is a protocol adapter in its own module. We've gone from weeks of integration work to a couple of hours. Each adapter runs in isolated processes, so its bugs can't infect the rest of the system.
Demand response without re-architecture. When a site needs to shed load, we fan out power limit commands to every charger process on that site simultaneously, and each process adjusts independently. There's no central coordinator bottleneck and no polling loop hitting a database.
Graceful degradation by default. When a manufacturer pushed a firmware update that started sending malformed payloads across their entire fleet, those charger processes crashed and their supervisors restarted them within milliseconds. Every other charger on the network continued operating without interruption. We didn't write special handling for it—the supervision tree did exactly what it was designed to do.
Let's Talk.
If you're working on device connectivity, EV charging & energy infrastructure, or just want to talk about building reliable systems, reach out at hello@voltra.com.

