Python Analysis Integration

AutoCore provides a dedicated external module, autocore-python, which runs Python scripts in a separate, isolated process alongside your control program.

The primary use case for this integration is post-test analysis. Rather than forcing automation engineers to write mathematical analysis (such as calculating averages, detecting peaks, or analyzing trace arrays) in Rust, you can offload that math to Python. Customers, technicians, or data scientists can then safely modify the analysis without recompiling the machine’s control program.

Architecture Concept: The Pure Calculator

In AutoCore, Python acts purely as a “Calculator”. It does not control the machine, read sensors directly, or manage system state.

Instead, the workflow is entirely driven by your Rust Control Program:

sequenceDiagram
    participant Rust as Control Program (Rust)
    participant Disk as AutoCore Datastore
    participant Py as autocore-python

    Rust->>Disk: 1. Test finishes. Rust saves traces & data to disk.
    Rust->>Py: 2. Rust requests Python analysis via IPC.
    Py->>Disk: 3. Python reads the saved test data.
    Py->>Py: 4. Python executes your script.
    Py-->>Rust: 5. Python returns the calculated results.
    Rust->>Disk: 6. Rust saves the final results.

Key Benefits of this approach:

  1. Safety: The Python interpreter runs in a completely separate process. If a customer writes a script that crashes or loops infinitely, it will never lock up the control program or crash the machine.
  2. Statelessness: The Python script starts with a “clean slate” on every single run. No memory or state is carried over between tests, preventing hidden bugs.
  3. Familiarity: Python receives plain dictionaries and lists, just as in typical Python data-science workflows.

Enabling the Python Module

Because autocore-python is an external module, you must explicitly enable it in your project.json file before your control program can talk to it.

Add the following block to the "modules" section of your project.json:

"modules": {
  ...
  "python": {
    "enabled": true,
    "executable": "autocore-python",
    "config": {
      "datastore_directory": "/srv/autocore/datastore"
    }
  }
}

The datastore_directory tells the Python module where to look for the results/ folder (where the saved test data lives) and the scripts/ folder (where your .py files live). If omitted, it defaults to ./datastore, relative to the server's working directory.
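
With the configuration above, the module would expect a layout along these lines (illustrative; the exact nesting under results/ is determined by the project, definition, and run IDs used later in this chapter):

```text
/srv/autocore/datastore/
├── scripts/
│   └── analyze_traction.py      # your analysis scripts
└── results/
    └── ...                      # saved test data, one tree per run
```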

Writing the Python Script

Python scripts are stored in the datastore/scripts/ directory.

To write an analysis script, you only need to define a single function. This function must accept exactly one argument: a dictionary called ctx (short for “context”).

The ctx Dictionary

The autocore-python module automatically bundles everything about the current test run into this single ctx dictionary. It looks like this:

# The structure of the `ctx` argument
ctx = {
    # Data from your test setup screen
    "project": { "customer": "ACME Corp", "operator": "John Doe" },
    "configuration": { "control_load": 500.0, "speed": 10.0 },
    
    # Existing results (if any have been saved so far)
    "results": { "test_duration_s": 45.2 },
    
    # An array of all cycle data recorded during the test
    "cycle_history": [
        { "cycle_index": 1, "actual_load": 498.2 },
        { "cycle_index": 2, "actual_load": 501.1 }
    ],
    
    # Any raw trace data (like force over time arrays) 
    # saved using set_raw_trace()
    "raw_data": {
        "trace": { 
            "t": [0.0, 0.1, 0.2, 0.3], 
            "force": [0.0, 100.5, 200.1, 300.8] 
        }
    },
    
    # Information identifying the test
    "run_id": "2026-04-20T12-00-00Z",
    
    # Any custom data the Rust program explicitly sent
    "custom_params": {} 
}
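
As a quick illustration of working with this structure, a script could also aggregate the per-cycle entries in ctx["cycle_history"] — for example, averaging actual_load across all recorded cycles. This is a sketch; the field names simply mirror the example structure above:

```python
def average_actual_load(ctx):
    """Average the actual_load field across all recorded cycles."""
    cycles = ctx.get("cycle_history", [])
    # Skip any cycle entries that are missing the field
    loads = [c["actual_load"] for c in cycles if "actual_load" in c]
    if not loads:
        return {"avg_actual_load": 0.0}
    return {"avg_actual_load": sum(loads) / len(loads)}
```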

Example Analysis Script

Here is an example script (datastore/scripts/analyze_traction.py) that calculates the maximum and average force from a raw data trace:

def calculate_results(ctx):
    # 1. Extract the data we need from the ctx dictionary
    # We use .get() to prevent crashes if the trace data is missing
    raw_data = ctx.get("raw_data", {})
    trace = raw_data.get("trace", {})
    
    force_array = trace.get("force", [])
    
    # If there is no force data, return zeros
    if len(force_array) == 0:
        return {
            "max_force": 0.0,
            "avg_force": 0.0
        }
    
    # 2. Perform the mathematical analysis
    max_force = max(force_array)
    avg_force = sum(force_array) / len(force_array)
    
    # 3. Return a dictionary of the calculated results.
    # The keys in this dictionary should perfectly match the 
    # `results_fields` declared in your project.json!
    return {
        "max_force": max_force,
        "avg_force": avg_force
    }
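
Because an analysis script is just a plain function that takes a dictionary, you can sanity-check it offline with ordinary Python before deploying it to the datastore. The sketch below inlines a copy of the function so it is self-contained; in a real project you would import it from your script file instead:

```python
# Offline sanity check for an analysis script (a sketch).
# This copy mirrors datastore/scripts/analyze_traction.py.

def calculate_results(ctx):
    force_array = ctx.get("raw_data", {}).get("trace", {}).get("force", [])
    if not force_array:
        return {"max_force": 0.0, "avg_force": 0.0}
    return {
        "max_force": max(force_array),
        "avg_force": sum(force_array) / len(force_array),
    }

# Build a fake ctx matching the structure autocore-python would pass in
fake_ctx = {
    "raw_data": {
        "trace": {
            "t": [0.0, 0.1, 0.2, 0.3],
            "force": [0.0, 100.5, 200.1, 300.8],
        }
    }
}

results = calculate_results(fake_ctx)
print(results)  # max_force 300.8, avg_force ~150.35
```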

Triggering Analysis from the Control Program

To trigger the Python script, your Rust program uses ctx.client.send(...) to send a request to the autocore-python module.

Because autocore-python must read the test files from the disk, you must ensure the test data is written to disk before calling Python.

Implementation Example

Assume you have a simple state machine. When the test finishes, you call set_raw_trace to write the high-speed data to the disk, and immediately trigger Python.

use autocore_std::{CommandClient, TickContext};
use crate::generated_results::ResultsSystem;
use crate::GlobalMemory; // adjust to wherever your GlobalMemory type lives
use serde_json::json;

pub struct MyProgram {
    results: ResultsSystem,
    state: State,
    python_request_id: Option<u32>, // Store the transaction ID to wait for the reply
    
    // Test data
    trace_t: Vec<f32>,
    trace_force: Vec<f32>,
}

enum State { Running, Calculating, Idle }

impl MyProgram {
    pub fn tick(&mut self, ctx: &mut TickContext<GlobalMemory>) {
        self.results.tick(&mut ctx.client);

        match self.state {
            State::Running => {
                // ... Accumulate data ...
                
                if ctx.gm.test_finished {
                    // 1. Write the raw trace data to disk immediately
                    self.results.translational_traction.set_raw_trace(&self.trace_t, &self.trace_force, ctx);
                    
                    // 2. We need the Project ID, Definition ID, and Run ID to tell
                    // Python where to find the files on the disk. 
                    let project_id = ctx.gm.results_active_project_id.as_str();
                    let def_id = ctx.gm.results_active_definition_id.as_str();
                    let run_id = ctx.gm.results_active_run_id.as_str();

                    // 3. Send the analysis request to autocore-python
                    let payload = json!({
                        "script": "analyze_traction.py",
                        "function": "calculate_results",
                        "project_id": project_id,
                        "definition_id": def_id,
                        "run_id": run_id,
                        "custom_params": {} // Optional extra data
                    });
                    
                    // Send the message and store the Transaction ID
                    let tid = ctx.client.send("python.run_analysis", payload);
                    self.python_request_id = Some(tid);
                    
                    self.state = State::Calculating;
                }
            }

            State::Calculating => {
                // We are waiting for Python to finish running.
                if let Some(tid) = self.python_request_id {
                    
                    // Check if a response has arrived for our specific transaction ID
                    if let Some(response) = ctx.client.take_response(tid) {
                        
                        if response.success {
                            // Success! Extract the dictionary returned by Python
                            let python_results = response.data;
                            
                            // Map the python output to your final result variables
                            let max_f = python_results["max_force"].as_f64().unwrap_or(0.0) as f32;
                            let avg_f = python_results["avg_force"].as_f64().unwrap_or(0.0) as f32;
                            
                            // 4. Update the final results system
                            self.results.translational_traction.update_results(max_f, avg_f, ctx);
                        } else {
                            log::error!("Python script failed: {}", response.error_message);
                            // Always close the test even on failure to prevent hanging
                            self.results.translational_traction.update_results(0.0, 0.0, ctx);
                        }
                        
                        // Clear the active test
                        self.results.active_test = None;
                        self.python_request_id = None;
                        self.state = State::Idle;
                    }
                }
            }
            
            State::Idle => { /* Waiting for next test... */ }
        }
    }
}

Best Practices

  • Never loop forever in Python: If your Python script contains an infinite loop (while True:), the autocore-python process will get stuck. Your control program handles this safely (it just stays in the Calculating state and your machine doesn’t crash), but the operator will be stuck waiting for results.
  • Fail gracefully: If an operator forgets to connect a sensor, your data arrays might be empty. Always check array lengths in Python before calling functions like max() or sum() to prevent unhandled script crashes.
  • Use custom_params for dynamic thresholds: If you have dynamic “Pass/Fail” limits set by the operator on the HMI (but not tracked by the Test Information System schema directly), you can inject them directly into the custom_params block in Rust so Python can check them.
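
As a sketch of that last point: if the Rust side added, say, {"custom_params": {"max_force_limit": 550.0}} to the analysis payload, the script could use it for a pass/fail verdict. The max_force_limit key and the passed result field here are hypothetical — a real passed field would also need a matching entry in your results_fields:

```python
def calculate_results(ctx):
    """Hypothetical pass/fail check driven by an operator-set limit."""
    force_array = ctx.get("raw_data", {}).get("trace", {}).get("force", [])
    # max_force_limit is a hypothetical key injected via custom_params in Rust;
    # default to infinity so a missing limit never fails the test
    limit = ctx.get("custom_params", {}).get("max_force_limit", float("inf"))
    max_force = max(force_array) if force_array else 0.0
    return {
        "max_force": max_force,
        "passed": max_force <= limit,
    }
```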