Signet Forge 0.1.0
C++20 Parquet library with AI-native extensions
signet::forge::InferenceRecord Struct Reference
A single ML inference event with full operational metadata.
#include <inference_log.hpp>
Public Member Functions
std::vector<uint8_t> serialize() const
    Serialize the record to a deterministic byte sequence.
Static Public Member Functions
static expected<InferenceRecord> deserialize(const uint8_t *data, size_t size)
    Reconstruct an InferenceRecord from its serialized byte representation.
Public Attributes
int64_t timestamp_ns {0}
    Inference timestamp (nanoseconds since epoch).
std::string model_id
    Model identifier (e.g., "gpt-4", "bert-base").
std::string model_version
    Model version hash or checkpoint ID.
InferenceType inference_type {InferenceType::CLASSIFICATION}
    Type of inference.
std::vector<float> input_embedding
    Input embedding (optional, may be empty).
std::string input_hash
    SHA-256 hash of raw input (for privacy).
std::string output_hash
    SHA-256 hash of raw output.
float output_score {0.0f}
    Primary output score/probability.
int64_t latency_ns {0}
    Inference latency in nanoseconds.
int32_t batch_size {1}
    Batch size.
int32_t input_tokens {0}
    Input token count (LLM, 0 if N/A).
int32_t output_tokens {0}
    Output token count (LLM, 0 if N/A).
std::string user_id_hash
    Hashed user ID (for privacy).
std::string session_id
    Session identifier.
std::string metadata_json
    Additional JSON metadata.
std::string training_dataset_id
    Training data identifier.
int64_t training_dataset_size {0}
    Number of samples in training dataset.
std::string training_data_characteristics
    Description of training data properties.
int64_t model_training_end_ns {0}
    Timestamp when model training completed (EU AI Act Art. 12).
int64_t model_training_data_cutoff_ns {0}
    Latest data timestamp used in training.
std::string model_retraining_schedule
    Cron or description of retraining schedule (EU AI Act Art. 13).
Detailed Description
A single ML inference event with full operational metadata.
Captures everything needed to audit and reproduce an inference: model identity, timing, resource usage, and privacy-preserving hashes of inputs and outputs. Raw data is never stored; only SHA-256 hashes are kept, to comply with GDPR data minimization.
Definition at line 64 of file inference_log.hpp.
Member Function Documentation
static expected<InferenceRecord> signet::forge::InferenceRecord::deserialize(const uint8_t *data, size_t size)  [inline, static]
Reconstruct an InferenceRecord from its serialized byte representation.
Parameters
    data    Pointer to the serialized bytes.
    size    Number of bytes available at data.
Definition at line 167 of file inference_log.hpp.
std::vector<uint8_t> signet::forge::InferenceRecord::serialize() const  [inline]
Serialize the record to a deterministic byte sequence.
Format: each field is written sequentially as little-endian values. Strings: 4-byte LE length prefix followed by the raw bytes. Vectors: 4-byte LE element count followed by the float data (4 bytes each, LE).
Definition at line 97 of file inference_log.hpp.
Member Data Documentation
int32_t signet::forge::InferenceRecord::batch_size {1}
Batch size.
Definition at line 74 of file inference_log.hpp.
InferenceType signet::forge::InferenceRecord::inference_type {InferenceType::CLASSIFICATION}
Type of inference.
Definition at line 68 of file inference_log.hpp.
std::vector<float> signet::forge::InferenceRecord::input_embedding
Input embedding (optional, may be empty).
Definition at line 69 of file inference_log.hpp.
std::string signet::forge::InferenceRecord::input_hash
SHA-256 hash of raw input (for privacy).
Definition at line 70 of file inference_log.hpp.
int32_t signet::forge::InferenceRecord::input_tokens {0}
Input token count (LLM, 0 if N/A).
Definition at line 75 of file inference_log.hpp.
int64_t signet::forge::InferenceRecord::latency_ns {0}
Inference latency in nanoseconds.
Definition at line 73 of file inference_log.hpp.
std::string signet::forge::InferenceRecord::metadata_json
Additional JSON metadata.
Definition at line 79 of file inference_log.hpp.
std::string signet::forge::InferenceRecord::model_id
Model identifier (e.g., "gpt-4", "bert-base").
Definition at line 66 of file inference_log.hpp.
std::string signet::forge::InferenceRecord::model_retraining_schedule
Cron or description of retraining schedule (EU AI Act Art. 13).
Definition at line 91 of file inference_log.hpp.
int64_t signet::forge::InferenceRecord::model_training_data_cutoff_ns {0}
Latest data timestamp used in training.
Definition at line 90 of file inference_log.hpp.
int64_t signet::forge::InferenceRecord::model_training_end_ns {0}
Timestamp when model training completed (EU AI Act Art. 12).
Definition at line 89 of file inference_log.hpp.
std::string signet::forge::InferenceRecord::model_version
Model version hash or checkpoint ID.
Definition at line 67 of file inference_log.hpp.
std::string signet::forge::InferenceRecord::output_hash
SHA-256 hash of raw output.
Definition at line 71 of file inference_log.hpp.
float signet::forge::InferenceRecord::output_score {0.0f}
Primary output score/probability.
Definition at line 72 of file inference_log.hpp.
int32_t signet::forge::InferenceRecord::output_tokens {0}
Output token count (LLM, 0 if N/A).
Definition at line 76 of file inference_log.hpp.
std::string signet::forge::InferenceRecord::session_id
Session identifier.
Definition at line 78 of file inference_log.hpp.
int64_t signet::forge::InferenceRecord::timestamp_ns {0}
Inference timestamp (nanoseconds since epoch).
Definition at line 65 of file inference_log.hpp.
std::string signet::forge::InferenceRecord::training_data_characteristics
Description of training data properties.
Definition at line 86 of file inference_log.hpp.
std::string signet::forge::InferenceRecord::training_dataset_id
Training data identifier.
Definition at line 84 of file inference_log.hpp.
int64_t signet::forge::InferenceRecord::training_dataset_size {0}
Number of samples in training dataset.
Definition at line 85 of file inference_log.hpp.
std::string signet::forge::InferenceRecord::user_id_hash
Hashed user ID (for privacy).
Definition at line 77 of file inference_log.hpp.