Your Bridge to Ultra-Low-Power AI
Talamo is our complete Software Development Kit (SDK) for Edge AI development. It’s designed to be your easy-to-use bridge from familiar machine learning frameworks to the powerful, ultra-low-power world of Spiking Neural Networks (SNNs).
With Talamo, you can leverage the groundbreaking efficiency of neuromorphic hardware without a steep learning curve.
Low-Code Solution
Spend less time writing boilerplate code and more time innovating. Our high-level API simplifies the entire workflow.
No Prior SNN Knowledge Required
You don't need to be an SNN expert. If you can build a model in PyTorch or TensorFlow, you can create a model for our hardware.
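As a concrete illustration, the starting point is just an ordinary network written in standard PyTorch. The small classifier below is a hypothetical example of the kind of definition you would bring to Talamo; the class name and layer sizes are illustrative, and the exact layer types the SDK supports are not listed on this page.

import torch.nn as nn

# A hypothetical, ordinary PyTorch model of the kind you would bring to Talamo.
# Nothing SNN-specific is required at this stage.
class KeywordSpotter(nn.Module):
    def __init__(self, n_features: int = 40, n_classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128),
            nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        return self.net(x)

model = KeywordSpotter()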
Easy Deployment & Compilation
Talamo handles the complex conversion and compilation, turning your existing models into efficient hardware-ready applications.
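The conversion call itself is not documented on this page, so the snippet below is only a hypothetical sketch of what handing an existing model to the SDK could look like. The `talamo` package name and the `talamo.compile` call are placeholders for illustration, not the documented API.

import torch.nn as nn

# Hypothetical sketch only: `talamo` and `talamo.compile` are placeholder names
# used for illustration; consult the SDK documentation for the real calls.
import talamo  # assumed package name

# Any existing PyTorch model, e.g. a small classifier trained elsewhere.
model = nn.Sequential(nn.Linear(40, 128), nn.ReLU(), nn.Linear(128, 10))

# The SDK is described as turning such a model into a hardware-ready
# application; a single compile-style call is assumed here for illustration.
hardware_app = talamo.compile(model)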
Innatera’s technology is already powering applications.
- Parameter and activation pruning
- Quantization-aware training (see the sketch after this list)
- Custom RNN layers
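Talamo's own quantization-aware training utilities are not shown on this page. As a rough illustration of the general technique, the sketch below uses PyTorch's built-in eager-mode QAT API as a stand-in; the SDK's actual QAT workflow may differ.

import torch.nn as nn
import torch.ao.quantization as tq

# Stand-in illustration of quantization-aware training with plain PyTorch.
model = nn.Sequential(
    tq.QuantStub(),       # marks where tensors enter the quantized region
    nn.Linear(40, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
    tq.DeQuantStub(),     # marks where tensors leave the quantized region
)
model.train()
model.qconfig = tq.get_default_qat_qconfig("fbgemm")
qat_model = tq.prepare_qat(model)

# ... train qat_model exactly like the float model; fake-quant ops simulate
# low-precision arithmetic so the weights adapt to quantization noise ...

qat_model.eval()
int8_model = tq.convert(qat_model)  # produce the genuinely quantized model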
Your model-building journey with Talamo
Pulsar brings real-time, event-driven intelligence directly to your devices, enabling sub-millisecond responsiveness at microwatt power levels.
from femtodriver import Femtodriver

# Simulate the power consumption, memory footprint, and latency of your model
with Femtodriver() as fd:
    meta_dir, femtofile_path, femtofile_size = fd.compile(fqir_graph)
    metrics = fd.simulate(
        input_period,  # Time between frames, used to estimate leakage
        input_file,    # The file to run the simulation on
    )
# Optional: Sparsify your model during training
for inputs, targets in train_dataloader:
    pruner.prune(model)
    optimizer.zero_grad()
    outputs = model(inputs)
    loss = loss_fn(outputs, targets)
    loss.backward()
    optimizer.step()
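The `pruner` object used in this loop is not defined on the page. As a minimal sketch, assuming a simple magnitude-based scheme, a stand-in could look like the class below; it is an illustrative helper, not the Talamo API.

import torch
import torch.nn as nn

# Hypothetical stand-in for `pruner`: re-applies a magnitude mask each step so
# a target fraction of Linear-layer weights stays at zero during training.
class MagnitudePruner:
    def __init__(self, sparsity: float = 0.5):
        self.sparsity = sparsity  # fraction of weights forced to zero

    @torch.no_grad()
    def prune(self, model: nn.Module) -> None:
        for module in model.modules():
            if isinstance(module, nn.Linear):
                w = module.weight
                k = int(self.sparsity * w.numel())
                if k == 0:
                    continue
                threshold = w.abs().flatten().kthvalue(k).values
                w.mul_((w.abs() > threshold).to(w.dtype))

pruner = MagnitudePruner(sparsity=0.5)

With a stand-in like this, the training loop above runs as written.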
Contact us
Please use the form below that matches your request type. If you are interested in a commercial engagement with Innatera, choose the 'Sales' form.