Talamo SDK

Your Bridge to Ultra-Low-Power AI

Talamo is our complete Software Development Kit (SDK) for Edge AI development. It’s designed to be your easy-to-use bridge from familiar machine learning frameworks to the powerful, ultra-low-power world of Spiking Neural Networks (SNNs).

With Talamo, you can leverage the groundbreaking efficiency of neuromorphic hardware without a steep learning curve.

SDK Background

Low-Code Solution

Spend less time writing boilerplate code and more time innovating. Our high-level API simplifies the entire workflow.

No Prior SNN Knowledge Required

You don't need to be an SNN expert. If you can build a model in PyTorch or TensorFlow, you can create a model for our hardware.

Easy Deployment & Compilation

Talamo handles the complex conversion and compilation, turning your existing models into efficient hardware-ready applications.

Applications & Industry Fit

Innatera’s technology is already powering real-world applications across industries.

Model optimization tools
  • Prune parameters and activations
  • Quantization-aware training
  • Custom RNN layers
Femtocrux compiler
Powered by Talamo SDK

Build your model journey with Talamo

Pulsar brings real-time, event-driven intelligence directly to your devices, enabling sub-millisecond responsiveness at microwatt power levels.

Quantize
Convert your trained model to a low-precision, fixed-point representation that the SPU can execute efficiently
Simulate
Simulate energy, latency, throughput, and footprint of your model for rapid model development
  from femtodriver import Femtodriver

  # Simulate power consumption, memory footprint, and latency of your model
  with Femtodriver() as fd:
      meta_dir, femtofile_path, femtofile_size = fd.compile(fqir_graph)
      metrics = fd.simulate(
          input_period,  # time between frames, used to estimate leakage
          input_file,    # the input file to run the simulation on
      )
Deploy
Deploy your compiled model to the SPU and run inference on hardware
Sparsify
Prune model to sparse representation that SPU is uniquely designed to accelerate
  # Optional: sparsify your model during training
  for batch in train_dataloader:
      pruner.prune(model)
      optimizer.zero_grad()
      outputs = model(batch)
      loss = loss_fn(outputs, batch)
      loss.backward()
      optimizer.step()
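The `pruner` object above comes from the SDK's model optimization tools. As an illustrative sketch only (not the SDK's actual API), a magnitude pruner zeroes the smallest-magnitude fraction of the weights, producing the sparse representation the SPU is designed to accelerate:

```python
# Minimal sketch of a magnitude pruner (hypothetical; the SDK's actual
# pruner API may differ). Zeroes the smallest-magnitude fraction of weights.
class MagnitudePruner:
    def __init__(self, sparsity):
        self.sparsity = sparsity  # target fraction of weights to zero out

    def prune(self, weights):
        # weights: flat list of floats, pruned in place and returned
        k = int(len(weights) * self.sparsity)
        if k == 0:
            return weights
        threshold = sorted(abs(w) for w in weights)[k - 1]
        for i, w in enumerate(weights):
            if abs(w) <= threshold:
                weights[i] = 0.0
        return weights

pruner = MagnitudePruner(sparsity=0.5)
pruned = pruner.prune([0.9, -0.1, 0.05, -0.7])
```

Because event-driven hardware can skip zeroed weights entirely, pruning translates directly into lower energy and latency at inference time.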
Quantization Aware Training
Run quantization-aware training to maintain high model performance despite tiny scale
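Quantization-aware training inserts a "fake-quantization" step into the forward pass so the model learns to tolerate low-precision arithmetic. A minimal sketch of that rounding step, with illustrative parameters (not the SDK's actual API):

```python
# Illustrative fake-quantization step (not the SDK's API): round a value
# to an int8-style grid, clamp it, and map it back to floating point.
def fake_quantize(x, num_bits=8, max_val=1.0):
    levels = 2 ** (num_bits - 1) - 1      # 127 positive steps for 8 bits
    scale = max_val / levels              # step size of the quantization grid
    q = round(x / scale)                  # snap to the nearest grid point
    q = max(-levels - 1, min(levels, q))  # clamp to the int8 range [-128, 127]
    return q * scale                      # dequantize back to float

# Values outside the representable range saturate at max_val
assert abs(fake_quantize(2.0) - 1.0) < 1e-9
```

During training, the loss is computed on these quantized values while gradients update the full-precision weights, which is how accuracy is maintained despite the tiny numeric scale.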

Contact us

Based on your request type, please use the relevant form below to contact us. Use the ‘Sales’ form if you are interested in a commercial engagement with Innatera.