Software Development

Your Bridge to Ultra-Low-Power AI

Talamo is a complete Software Development Kit (SDK) for Edge AI development, designed to bridge familiar machine learning frameworks to the ultra-low-power world of Spiking Neural Networks (SNNs).

SDK Background

Low-Code Solution

Spend less time writing boilerplate code and more time innovating. Our high-level API simplifies the entire workflow.

No Prior SNN Knowledge Required

You don't need to be an SNN expert. If you can build a model in PyTorch or TensorFlow, you can create a model for our hardware.

Easy Deployment & Compilation

Talamo handles the complex conversion and compilation, turning your existing models into efficient hardware-ready applications.

From Model to Deployment, Simplified

  • PyTorch Extension for Spiking Neural Networks (SNNs)

  • Spike Encoders & Decoders to transform numerical data into spikes

  • Modular application pipeline design
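To make the encoder/decoder idea above concrete, here is a plain-Python sketch of the underlying concepts: an integrate-and-fire encoder accumulates a feature value into a membrane potential and emits a spike each time a threshold is crossed, and a max-rate decoder picks the channel that spiked most. This is illustrative only, not Talamo's `IFEncoder`/`MaxRateDecoder` implementation; the function names and parameters are hypothetical.

```python
def if_encode(value, steps=10, threshold=1.0):
    """Encode one feature value as a binary spike train of length `steps`."""
    membrane, spikes = 0.0, []
    for _ in range(steps):
        membrane += value            # integrate the input
        if membrane >= threshold:    # fire and reset on threshold crossing
            spikes.append(1)
            membrane -= threshold
        else:
            spikes.append(0)
    return spikes

def max_rate_decode(spike_trains):
    """Return the index of the channel with the highest spike count."""
    return max(range(len(spike_trains)), key=lambda i: sum(spike_trains[i]))

# A larger input value produces a denser spike train:
low, high = if_encode(0.2), if_encode(0.9)
print(sum(low), sum(high))           # the 0.9 input spikes more often
print(max_rate_decode([low, high]))  # -> 1
```

The key point is that spikes carry information in their timing and rate rather than in continuous activations, which is what enables event-driven, microwatt-level inference on the hardware.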


Converts the end-to-end application into C source code


Standard MCU
Programming Workflow

Powered by Talamo SDK

Build your model journey with Talamo

Pulsar brings real-time, event-driven intelligence directly to your devices, enabling sub-millisecond responsiveness at microwatt power levels.

Build

    from talamo.encoders.c1 import IFEncoder
    from talamo.decoders import MaxRateDecoder
    from talamo.snn.containers import Snn
    from talamo.pipeline.elements import Pipeline, MFCC

    num_features = 32
    snn_model = MySNNModel()

    # End-to-end SNN pipeline including preprocessing
    pipe = Pipeline(
        steps=[
            # Mel-frequency cepstral coefficients preprocessing
            MFCC(n_mfcc=num_features, n_fft=512, hop_length=512, n_mels=128, sample_rate=22050),
            # Integrate-and-fire encoder
            IFEncoder(num_encoder_channels=num_features),
            Snn(snn_model),
            MaxRateDecoder(),
        ]
    )
Train

    import torch

    lr = 0.2
    batch_size = 128

    # Parameters for different pipeline steps can be trained
    optimizer = torch.optim.Adam([
        {'params': snn_params, 'lr': lr},
        {'params': encoder_params, 'lr': lr * 2}], lr=lr)
    loss_fn = torch.nn.NLLLoss()

    # Built-in training function, or implement a custom training loop
    pipe.fit(
        dataset=train_dataset,
        epochs=50,
        dataloader_type=torch.utils.data.DataLoader,
        dataloader_args={'batch_size': batch_size, 'shuffle': True},
        optimizer=optimizer,
        loss_function=loss_fn,
        verbose=2,
    )
Quantize

    from talamo.quantization import RoundAndClamp

    # Quantize using the Talamo quantizer
    weight_quantizer = RoundAndClamp(
        all_params=pipe.query_torch_params("*.weight"),
        lower_limit=-128,
        upper_limit=127,
    )
    pipe.quantize(quantizers=[weight_quantizer])
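As a plain-Python sketch of what round-and-clamp quantization does conceptually (this is illustrative, not Talamo's `RoundAndClamp` implementation): each weight is rounded to the nearest integer, then clamped into the representable range, here the signed 8-bit interval [-128, 127] used in the snippet above.

```python
def round_and_clamp(weights, lower_limit=-128, upper_limit=127):
    """Round each weight to the nearest integer and clamp it into range."""
    return [max(lower_limit, min(upper_limit, round(w))) for w in weights]

# Values inside the range are simply rounded; out-of-range values saturate:
print(round_and_clamp([0.4, 1.6, -200.0, 300.0]))  # -> [0, 2, -128, 127]
```

Constraining weights to 8-bit integers is what lets the compiled pipeline run efficiently on fixed-point hardware.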
Deploy

    from talamo.device.c1 import Soc

    # Deploy the pipeline to the Innatera SoC
    innatera_soc = Soc()
    pipe.to(innatera_soc)
    pipe.evaluate(dataset=test_dataset)