API Reference
Quantum Cognition Machine Learning (QCML) is a quantum-inspired machine learning framework that represents data points as quantum states in a complex Hilbert space. Each observation \(\mathbf{x}_t \in \mathbb{R}^K\) is mapped to a quantum state \(|\psi_t\rangle\) by finding the ground state of an error Hamiltonian
\[
H(\mathbf{x}_t) = \sum_{k=1}^{K} \left(A_k - x_{t,k}\, I\right)^2,
\]
where \(A_k\) are Hermitian operators (quantum observables) corresponding to each feature, and \(I\) is the identity matrix. The ground state \(|\psi_t\rangle\) therefore encodes the data as a quantum state.
For supervised learning, the target variables \(y_1, \ldots, y_n\) are each assigned quantum forecast observables \(B_1, \ldots, B_n\), and the outputs of the model corresponding to input \(\mathbf{x}_t\) are given by
\[
\hat{y}_{i,t} = \langle \psi_t | B_i | \psi_t \rangle,
\]
that is, the expectation values of the measurement of \(B_i\) in the state \(|\psi_t\rangle\).
The model parameters \(A_k\) and \(B_i\) are optimized to minimize a loss function \(\mathcal{L}(\hat{y}_{i,t}, y_{i,t})\), such as mean absolute error or cross-entropy (for regression or classification, respectively). This quantum approach enables context-aware feature modeling, dimensionality reduction, and robust generalization, especially in high-dimensional, low-sample-size settings.
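As a concrete illustration, the snippet below reconstructs this forward pass numerically with NumPy. It is a minimal sketch of the equations above, not the library's internal implementation; the randomly generated Hermitian matrices stand in for the learned observables \(A_k\) and \(B_i\).

```python
import numpy as np

rng = np.random.default_rng(0)
d, K = 8, 3  # Hilbert space dimension and number of input features

def random_hermitian(dim):
    """Random Hermitian matrix, standing in for a learned observable."""
    M = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    return (M + M.conj().T) / 2

A = [random_hermitian(d) for _ in range(K)]  # feature observables A_k
B = random_hermitian(d)                      # forecast observable B
x = rng.normal(size=K)                       # one observation x_t

# Error Hamiltonian H(x_t) = sum_k (A_k - x_{t,k} I)^2
I = np.eye(d)
H = sum((A_k - x_k * I) @ (A_k - x_k * I) for A_k, x_k in zip(A, x))

# |psi_t> is the ground state, i.e. the eigenvector for the smallest eigenvalue
eigvals, eigvecs = np.linalg.eigh(H)
psi = eigvecs[:, 0]

# Model output: expectation value <psi_t | B | psi_t>
y_hat = float(np.real(psi.conj() @ B @ psi))
print(y_hat)
```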
For further reading, see the References section below.
QCMLRegressor
- class honeio.integrations.QCMLRegressor(*, hilbert_space_dim=8, epochs=1000, random_state=0, lr=0.1, weights_lr=None, opt_betas=None, loss='L1', groups=None, device='cpu', batch_size=None, dropout_rate=0.0)[source]
Bases: QCMLBase, RegressorMixin
Scikit-learn wrapper for QCML Regression models.
- loss_fn: MSELoss | L1Loss | SmoothL1Loss
- __init__(*, hilbert_space_dim=8, epochs=1000, random_state=0, lr=0.1, weights_lr=None, opt_betas=None, loss='L1', groups=None, device='cpu', batch_size=None, dropout_rate=0.0)[source]
Initialize the QCMLRegressor.
- Parameters:
hilbert_space_dim (int, optional) – The dimension of the Hilbert space, by default 8
epochs (int, optional) – The number of epochs for training, by default 1000
random_state (int, optional) – The random seed, by default 0
lr (float, optional) – The learning rate, by default 0.1
weights_lr (float, optional) – The learning rate for the weight layer, by default None, which uses the same learning rate as lr.
opt_betas (tuple[float, float], optional) – The betas for the optimizer, by default None, which uses (0.9, 0.999).
loss (str, optional) – The loss function to use, by default ‘L1’. Options: ‘L1’, ‘L2’, ‘SmoothL1’.
groups (list[list[int]] | None, optional) – The indices of groups of input features that should share the same weight in the weight layer. This is a list of lists, where each inner list contains the indices of the inputs in that group. This can be useful for one-hot encoded features, where you may want to assign the same weight to all categories (see the usage sketch after this parameter list). If None, all input weights are learned independently. By default, None.
device (str, optional) – The device to use for training, by default ‘cpu’
batch_size (int, optional) – The batch size for training, by default None. If None or -1, no batching is performed.
dropout_rate (float, optional) – The dropout rate for the model, by default 0.0. If 0.0, no dropout is applied.
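A hypothetical end-to-end usage sketch, based on the parameters documented above and the scikit-learn estimator interface; the synthetic data and chosen hyperparameters are illustrative only.

```python
import numpy as np
from honeio.integrations import QCMLRegressor

# Synthetic data: columns 2-4 play the role of a one-hot encoded category.
X = np.random.rand(200, 5)
y = np.random.rand(200)

model = QCMLRegressor(
    hilbert_space_dim=16,
    epochs=500,
    lr=0.05,
    loss="SmoothL1",
    groups=[[2, 3, 4]],   # share one weight across the one-hot columns
    random_state=42,
)
model.fit(X, y)
y_pred = model.predict(X)
```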
- fit(X, y)[source]
Fit Method.
- Parameters:
X (np.ndarray) – The input features.
y (np.ndarray) – The target values.
- Returns:
The fitted model.
- Return type:
QCMLRegressor
- predict(X)[source]
Predict Method.
- Parameters:
X (np.ndarray) – The input features.
- Returns:
The predicted values.
- Return type:
np.ndarray
- classmethod load(load_states_fn)[source]
Load the model.
This class method instantiates a new QCMLRegressor object and loads the states of the model using a load_states_fn.
You can use the load_states_pickle function from the hooks module as an example of how to implement a load_states_fn.
- Parameters:
load_states_fn (LoadStatesFn) – Function to load the states of the model.
- Returns:
The loaded model.
- Return type:
QCMLRegressor
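The exact callable signature expected for a LoadStatesFn is not reproduced in this reference, so the following is only a hedged sketch: it assumes a zero-argument callable that returns previously saved model states (the load_states_pickle helper in the hooks module is the documented reference implementation). The file name is hypothetical.

```python
import pickle
from honeio.integrations import QCMLRegressor

def load_states_from_disk():
    # Hypothetical loader: returns whatever states were previously serialized.
    with open("qcml_regressor_states.pkl", "rb") as f:
        return pickle.load(f)

model = QCMLRegressor.load(load_states_fn=load_states_from_disk)
```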
- set_score_request(*, sample_weight='$UNCHANGED$')
Configure whether metadata should be requested to be passed to the score method.
Note that this method is only relevant when this estimator is used as a sub-estimator within a meta-estimator and metadata routing is enabled with enable_metadata_routing=True (see sklearn.set_config()). Please check the User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to score.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
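For example, the request flag is typically set once on the estimator after enabling metadata routing globally. The snippet below follows the standard scikit-learn pattern and assumes nothing beyond the documented constructor and the scikit-learn config API.

```python
import sklearn
from honeio.integrations import QCMLRegressor

# Routing only has an effect inside meta-estimators, and only once it is enabled.
sklearn.set_config(enable_metadata_routing=True)

# Ask meta-estimators to pass sample_weight through to score(); use False to block it.
model = QCMLRegressor(epochs=100).set_score_request(sample_weight=True)
```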
QCMLClassifier
- class honeio.integrations.QCMLClassifier(*, hilbert_space_dim=8, epochs=1000, random_state=0, lr=0.1, weights_lr=None, opt_betas=None, loss='cross_entropy', groups=None, device='cpu', batch_size=None, dropout_rate=0.0)[source]
Bases: QCMLBase, ClassifierMixin
Scikit-learn wrapper for QCML Classification models.
- loss_fn: CrossEntropyLoss
- __init__(*, hilbert_space_dim=8, epochs=1000, random_state=0, lr=0.1, weights_lr=None, opt_betas=None, loss='cross_entropy', groups=None, device='cpu', batch_size=None, dropout_rate=0.0)[source]
Initialize the QCMLClassifier.
- Parameters:
hilbert_space_dim (int, optional) – The dimension of the Hilbert space, by default 8
epochs (int, optional) – The number of epochs for training, by default 1000
random_state (int, optional) – The random seed, by default 0
lr (float, optional) – The learning rate, by default 0.1
weights_lr (float, optional) – The learning rate for the weight layer, by default None, which uses the same learning rate as lr.
opt_betas (tuple[float, float], optional) – The betas for the optimizer, by default None, which uses (0.9, 0.999).
loss (str, optional) – The loss function to use, by default ‘cross_entropy’.
groups (list[list[int]] | None, optional) – The indices of groups of input features that should share the same weight in the weight layer. This is a list of lists, where each inner list contains the indices of the inputs in that group. This can be useful for one-hot encoded features, where you may want to assign the same weight to all categories. If None, all input weights are learned independently. By default, None.
device (str, optional) – The device to use for training, by default ‘cpu’
batch_size (int, optional) – The batch size for training, by default None. If None or -1, no batching is performed.
dropout_rate (float, optional) – The dropout rate for the model, by default 0.0. If 0.0, no dropout is applied.
- fit(X, y, classes=None)[source]
Fit Method.
- Parameters:
X (np.ndarray) – The input features.
y (np.ndarray) – The target values.
classes (list | None, optional) – List of possible classes for the target values. If None, the classes will be inferred from the target values. By default, None.
- Returns:
The fitted model.
- Return type:
QCMLClassifier
- predict_proba(X)[source]
Predict class probabilities.
- Parameters:
X (np.ndarray) – The input features.
- Returns:
The predicted class probabilities.
- Return type:
np.ndarray
- predict(X)[source]
Predict Method.
- Parameters:
X (np.ndarray) – The input features.
- Returns:
The predicted values.
- Return type:
np.ndarray
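A hypothetical classification sketch tying the methods above together; the synthetic data, class labels, and hyperparameters are illustrative only.

```python
import numpy as np
from honeio.integrations import QCMLClassifier

X = np.random.rand(300, 6)
y = np.random.choice(["a", "b", "c"], size=300)

clf = QCMLClassifier(hilbert_space_dim=16, epochs=500, random_state=0)
clf.fit(X, y, classes=["a", "b", "c"])  # classes can also be inferred from y

proba = clf.predict_proba(X)   # per-class probabilities, shape (n_samples, n_classes)
labels = clf.predict(X)        # hard class predictions
```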
- set_fit_request(*, classes='$UNCHANGED$')
Configure whether metadata should be requested to be passed to the fit method.
Note that this method is only relevant when this estimator is used as a sub-estimator within a meta-estimator and metadata routing is enabled with enable_metadata_routing=True (see sklearn.set_config()). Please check the User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to fit.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
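As an illustration, requesting the classes metadata lets a meta-estimator forward it to fit. The sketch below assumes a recent scikit-learn release in which Pipeline supports metadata routing; the preprocessing step and data are incidental.

```python
import numpy as np
import sklearn
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from honeio.integrations import QCMLClassifier

sklearn.set_config(enable_metadata_routing=True)

X = np.random.rand(300, 6)
y = np.random.choice([0, 1, 2], size=300)

clf = QCMLClassifier(epochs=200).set_fit_request(classes=True)
pipe = Pipeline([("scale", StandardScaler()), ("qcml", clf)])

# With routing enabled, `classes` is forwarded to QCMLClassifier.fit.
pipe.fit(X, y, classes=[0, 1, 2])
```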
- set_score_request(*, sample_weight='$UNCHANGED$')
Configure whether metadata should be requested to be passed to the score method.
Note that this method is only relevant when this estimator is used as a sub-estimator within a meta-estimator and metadata routing is enabled with enable_metadata_routing=True (see sklearn.set_config()). Please check the User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to score.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
- classmethod load(load_states_fn)[source]
Load the model.
This class method instantiates a new QCMLClassifier object and loads the states of the model using a load_states_fn.
You can use the load_states_pickle function from the hooks module as an example of how to implement a load_states_fn.
- Parameters:
load_states_fn (LoadStatesFn) – Function to load the states of the model.
- Returns:
The loaded model.
- Return type:
QCMLClassifier
References
L. Candelori, A.G. Abanov, J. Berger, C.J. Hogan, V. Kirakosyan, K. Musaelian, R. Samson, J.E.T. Smith, D. Villani, M.T. Wells, M. Xu. Robust estimation of the intrinsic dimension of data sets with quantum cognition machine learning. Scientific Reports, 2025. https://www.nature.com/articles/s41598-025-91676-8
Di Caro, V. Kirakosyan, A.G. Abanov, J.R. Busemeyer, L. Candelori, N. Hartmann, E.T. Lam, K. Musaelian, R. Samson, H. Steinacker, D. Villani, M.T. Wells, R.J. Wenstrup, M. Xu. Quantum Cognition Machine Learning for Forecasting Chromosomal Instability. arXiv:2506.03199. https://arxiv.org/abs/2506.03199