FAQ

  • You specialize in Cortex-M4, M33, and M7. Can you work with other microcontrollers?

    While our deepest expertise lies within the modern Cortex-M family, our principles of resource-constrained ML are applicable to other platforms. We have worked with older ARM architectures and other MCU families (e.g., ESP32, STM8). The key is a thorough evaluation of the target's capabilities. If your project involves a different MCU, let's discuss the specifics.

  • What is the typical memory and flash footprint for one of your models?

    This is highly dependent on the complexity of the task. However, our primary goal is minimalism. For binary classification or anomaly detection, we aim for models in the range of 10-100 KB of flash and a few dozen KB of RAM. We use techniques like 8-bit quantization to radically reduce the footprint while preserving performance.

  • Will you need access to our product's full source code?

    Not necessarily. Our goal is a symbiotic integration, not a takeover. We typically deliver our solution as a statically linked library or C source files with a clear API. You will need to handle the sensor data input and integrate our function calls into your main loop or RTOS task. We only need to understand the hardware abstraction layer for the relevant sensors, not your entire application logic.

  • What exactly is the final deliverable?

    The final deliverable is code tailored to your specific hardware. This typically includes the optimized model as a C-array, the inference engine library, and example code demonstrating how to call the model and interpret its output. We ensure it compiles with your toolchain (e.g., GCC, ARM Keil) and provide support during the integration phase.

  • Why not just use a more powerful Cortex-A processor or a small Linux-based system?

    Because your product is likely already designed, deployed, and cost-optimized. Our entire value proposition is built around augmenting the hardware you already have, avoiding the immense cost, time, and risk associated with a full redesign. We provide a path to intelligence that respects your existing investment and power budget.

  • We have a unique, proprietary sensor. Can you build a model for its data?

    Absolutely. We thrive on unique data challenges. Our "Field Data Acquisition & Model Training" service is designed for exactly this scenario. We help you build the pipeline to collect data from your proprietary sensor, and then we train a custom model that learns the specific patterns and nuances of your system.

  • What is the average timeline for a project?

    A typical project, from initial technical dive to final library delivery, can range from 4 to 12 weeks. This timeline is influenced by factors like data availability, model complexity, and the level of integration support required.

  • Who owns the intellectual property (IP) of the final, trained model?

    You do. While Daugava Labs retains ownership of our underlying inference libraries and proprietary tools, the custom-trained model developed using your data is your intellectual property. This is always clearly defined in our Statement of Work (SOW).