2.1.1.1.1.12. emicroml.modelling.cbed.distortion.estimation.MLDataset

class MLDataset(path_to_ml_dataset, entire_ml_dataset_is_to_be_cached=False, ml_data_values_are_to_be_checked=False, max_num_ml_data_instances_per_chunk=100, skip_validation_and_conversion=False)[source]

Bases: _MLDataset

A wrapper around the PyTorch dataset class torch.utils.data.Dataset.

The current class is a subclass of fancytypes.PreSerializableAndUpdatable.

The current class represents machine learning (ML) datasets that can be used to train and/or evaluate ML models represented by the class emicroml.modelling.cbed.distortion.estimation.MLModel.

Parameters:
path_to_ml_dataset : str, optional

The relative or absolute filename of the HDF5 file in which the ML dataset is stored. The input HDF5 file is assumed to have the same file structure as, and to have been created in a manner consistent with, the HDF5 files generated by the function emicroml.modelling.cbed.distortion.estimation.generate_and_save_ml_dataset(). See the documentation of said function for a description of the file structure.

entire_ml_dataset_is_to_be_cached : bool, optional

If entire_ml_dataset_is_to_be_cached is set to True, then, memory permitting, the entire ML dataset is read from the HDF5 file and cached in the instance of the current class upon construction of said instance. In this case, method calls that access ML data instances do so via the cached ML dataset. Otherwise, the entire ML dataset is not read and cached upon construction, and method calls that access ML data instances read directly from the HDF5 file. The first scenario yields slower instance construction, larger memory requirements, and faster ML dataset access after construction, compared to the second scenario. If the parameter ml_data_values_are_to_be_checked is set to True, then the construction times in the two aforementioned scenarios are comparable.

ml_data_values_are_to_be_checked : bool, optional

If ml_data_values_are_to_be_checked is set to True, then the data values of the relevant HDF5 datasets stored in the HDF5 file are checked, raising an exception if any data values are invalid. Otherwise, the data values are not checked.

max_num_ml_data_instances_per_chunk : int | float("inf"), optional

If ml_data_values_are_to_be_checked is set to False, then max_num_ml_data_instances_per_chunk is effectively ignored. Otherwise, max_num_ml_data_instances_per_chunk specifies the maximum number of ML data instances to read from the HDF5 file at a time when validating the data values stored therein.

skip_validation_and_conversion : bool, optional

Let validation_and_conversion_funcs and core_attrs denote the attributes validation_and_conversion_funcs and core_attrs respectively, both of which are dict objects.

Let params_to_be_mapped_to_core_attrs denote the dict representation of the constructor parameters, excluding the parameter skip_validation_and_conversion, where each dict key key is a different constructor parameter name, and params_to_be_mapped_to_core_attrs[key] yields the value of the constructor parameter with the name given by key.

If skip_validation_and_conversion is set to False, then for each key key in params_to_be_mapped_to_core_attrs, core_attrs[key] is set to validation_and_conversion_funcs[key](params_to_be_mapped_to_core_attrs).

Otherwise, if skip_validation_and_conversion is set to True, then core_attrs is set to params_to_be_mapped_to_core_attrs.copy(). This option is desired primarily when the user wants to avoid potentially expensive deep copies and/or conversions of the dict values of params_to_be_mapped_to_core_attrs, as it is guaranteed that no copies or conversions are made in this case.
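The construction logic described above can be sketched in plain Python. The sketch below is illustrative only, not the actual fancytypes implementation, and the single validation function shown is hypothetical.

```python
# Illustrative sketch of the construction logic described above; this is
# NOT the actual fancytypes implementation, just the described behaviour.

def construct_core_attrs(params_to_be_mapped_to_core_attrs,
                         validation_and_conversion_funcs,
                         skip_validation_and_conversion=False):
    if skip_validation_and_conversion:
        # No validation, no conversion: a plain shallow copy.
        return params_to_be_mapped_to_core_attrs.copy()
    # Validate and convert each candidate core attribute. Note that each
    # function receives the entire candidate dict, not a single value.
    return {key: validation_and_conversion_funcs[key](
                params_to_be_mapped_to_core_attrs)
            for key in params_to_be_mapped_to_core_attrs}

# Hypothetical example with a single core attribute.
funcs = {"max_num_ml_data_instances_per_chunk":
         lambda params: int(params["max_num_ml_data_instances_per_chunk"])}
params = {"max_num_ml_data_instances_per_chunk": 100.0}

core_attrs = construct_core_attrs(params, funcs)
print(core_attrs)  # {'max_num_ml_data_instances_per_chunk': 100}
```

With skip_validation_and_conversion=True the float value would be stored as-is, which is cheaper but bypasses the conversion to int.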

Attributes:
core_attrs

dict: The “core attributes”.

de_pre_serialization_funcs

dict: The de-pre-serialization functions.

max_num_disks_in_any_cbed_pattern

int: The maximum possible number of CBED disks in any imaged CBED pattern stored in the machine learning dataset.

normalization_biases

dict: The normalization biases of the normalizable elements.

normalization_weights

dict: The normalization weights of the normalizable elements.

num_pixels_across_each_cbed_pattern

int: The number of pixels across each imaged CBED pattern stored in the machine learning dataset.

pre_serialization_funcs

dict: The pre-serialization functions.

validation_and_conversion_funcs

dict: The validation and conversion functions.

Methods

de_pre_serialize([serializable_rep, ...])

Construct an instance from a serializable representation.

dump([filename, overwrite])

Serialize instance and save the result in a JSON file.

dumps()

Serialize instance.

get_core_attrs([deep_copy])

Return the core attributes.

get_de_pre_serialization_funcs()

Return the de-pre-serialization functions.

get_ml_data_instances([single_dim_slice, ...])

Return a subset of the machine learning data instances as a dictionary.

get_ml_data_instances_as_signals([...])

Return a subset of the machine learning data instances as a sequence of Hyperspy signals.

get_pre_serialization_funcs()

Return the pre-serialization functions.

get_validation_and_conversion_funcs()

Return the validation and conversion functions.

load([filename, skip_validation_and_conversion])

Construct an instance from a serialized representation that is stored in a JSON file.

loads([serialized_rep, ...])

Construct an instance from a serialized representation.

pre_serialize()

Pre-serialize instance.

update(new_core_attr_subset_candidate[, ...])

Update a subset of the core attributes.

execute_post_core_attrs_update_actions


property core_attrs

dict: The “core attributes”.

The keys of core_attrs are the same as those of the attribute validation_and_conversion_funcs, which is also a dict object.

Note that core_attrs should be considered read-only.

property de_pre_serialization_funcs

dict: The de-pre-serialization functions.

de_pre_serialization_funcs has the same keys as the attribute validation_and_conversion_funcs, which is also a dict object.

Let validation_and_conversion_funcs and pre_serialization_funcs denote the attributes validation_and_conversion_funcs and pre_serialization_funcs respectively, the last of which is a dict object as well.

Let core_attrs_candidate_1 be any dict object that has the same keys as validation_and_conversion_funcs, where for each dict key key in core_attrs_candidate_1, validation_and_conversion_funcs[key](core_attrs_candidate_1) does not raise an exception.

Let serializable_rep be a dict object that has the same keys as core_attrs_candidate_1, where for each dict key key in core_attrs_candidate_1, serializable_rep[key] is set to pre_serialization_funcs[key](core_attrs_candidate_1[key]).

The items of de_pre_serialization_funcs are expected to be set to callable objects that would lead to de_pre_serialization_funcs[key](serializable_rep[key]) not raising an exception for each dict key key in serializable_rep.

Let core_attrs_candidate_2 be a dict object that has the same keys as serializable_rep, where for each dict key key in validation_and_conversion_funcs, core_attrs_candidate_2[key] is set to de_pre_serialization_funcs[key](serializable_rep[key]).

The items of de_pre_serialization_funcs are also expected to be set to callable objects that would lead to validation_and_conversion_funcs[key](core_attrs_candidate_2) not raising an exception for each dict key key in core_attrs_candidate_2.

Note that de_pre_serialization_funcs should be considered read-only.
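The expectations above amount to a round trip between core attributes and their serializable representation. The following sketch illustrates the pattern with a hypothetical single-key dict; the key name and the functions are made up for illustration and are not the actual functions used by this class.

```python
# Hypothetical illustration of the round trip described above. The key
# name and the pre-/de-pre-serialization functions are made up.
import json

pre_serialization_funcs = {"path_to_ml_dataset": str}
de_pre_serialization_funcs = {"path_to_ml_dataset": str}

core_attrs_candidate_1 = {"path_to_ml_dataset": "my_ml_dataset.h5"}

# Pre-serialize: map each core attribute to a JSON-serializable object.
serializable_rep = {key: pre_serialization_funcs[key](val)
                    for key, val in core_attrs_candidate_1.items()}
json.dumps(serializable_rep)  # must not raise

# De-pre-serialize: invert the mapping to recover the core attributes.
core_attrs_candidate_2 = {key: de_pre_serialization_funcs[key](val)
                          for key, val in serializable_rep.items()}
assert core_attrs_candidate_2 == core_attrs_candidate_1
```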

classmethod de_pre_serialize(serializable_rep={}, skip_validation_and_conversion=False)

Construct an instance from a serializable representation.

Parameters:
serializable_rep : dict, optional

A dict object that has the same keys as the attribute validation_and_conversion_funcs, which is also a dict object.

Let validation_and_conversion_funcs and de_pre_serialization_funcs denote the attributes validation_and_conversion_funcs and de_pre_serialization_funcs respectively, the last of which is a dict object as well.

The items of serializable_rep are expected to be objects that would lead to de_pre_serialization_funcs[key](serializable_rep[key]) not raising an exception for each dict key key in serializable_rep.

Let core_attrs_candidate be a dict object that has the same keys as serializable_rep, where for each dict key key in serializable_rep, core_attrs_candidate[key] is set to de_pre_serialization_funcs[key](serializable_rep[key]).

The items of serializable_rep are also expected to be set to objects that would lead to validation_and_conversion_funcs[key](core_attrs_candidate) not raising an exception for each dict key key in serializable_rep.

skip_validation_and_conversion : bool, optional

Let core_attrs denote the attribute core_attrs, which is a dict object.

If skip_validation_and_conversion is set to False, then for each key key in serializable_rep, core_attrs[key] is set to validation_and_conversion_funcs[key](core_attrs_candidate), with validation_and_conversion_funcs and core_attrs_candidate being introduced in the above description of serializable_rep.

Otherwise, if skip_validation_and_conversion is set to True, then core_attrs is set to core_attrs_candidate.copy(). This option is desired primarily when the user wants to avoid potentially expensive deep copies and/or conversions of the dict values of core_attrs_candidate, as it is guaranteed that no copies or conversions are made in this case.

Returns:
instance_of_current_cls : Current class

An instance constructed from the serializable representation serializable_rep.

dump(filename='serialized_rep_of_fancytype.json', overwrite=False)

Serialize instance and save the result in a JSON file.

Parameters:
filename : str, optional

The relative or absolute path to the JSON file in which to store the serialized representation of an instance.

overwrite : bool, optional

If overwrite is set to False and a file exists at the path filename, then the serialized instance is not written to that file and an exception is raised. Otherwise, the serialized instance is written to that file, provided no other issues occur.
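The overwrite guard described above behaves roughly like the following stdlib-only sketch; dump_sketch is a hypothetical stand-in, not the actual implementation.

```python
# Illustrative stand-in for the overwrite behaviour described above.
import json
import os
import tempfile

def dump_sketch(serializable_rep, filename, overwrite=False):
    # Refuse to clobber an existing file unless overwrite is set to True.
    if os.path.exists(filename) and not overwrite:
        raise IOError(f"A file already exists at the path {filename}.")
    with open(filename, "w") as file_obj:
        json.dump(serializable_rep, file_obj)

tmp_dir = tempfile.mkdtemp()
path = os.path.join(tmp_dir, "serialized_rep_of_fancytype.json")

dump_sketch({"a": 1}, path)                  # succeeds: no file exists yet
try:
    dump_sketch({"a": 2}, path)              # raises: file exists, overwrite=False
except IOError:
    pass
dump_sketch({"a": 2}, path, overwrite=True)  # succeeds: overwrite allowed
```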

dumps()

Serialize instance.

Returns:
serialized_rep : dict

A serialized representation of an instance.

get_core_attrs(deep_copy=True)

Return the core attributes.

Parameters:
deep_copy : bool, optional

Let core_attrs denote the attribute core_attrs, which is a dict object.

If deep_copy is set to True, then a deep copy of core_attrs is returned. Otherwise, a shallow copy of core_attrs is returned.

Returns:
core_attrs : dict

The attribute core_attrs.
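The practical difference between the two options can be seen with the standard copy module; the core attribute name below is hypothetical.

```python
import copy

# Illustrative only: the effect of deep_copy on a dict of core attributes.
core_attrs = {"sampling_grid_dims_in_pixels": [512, 512]}

shallow = copy.copy(core_attrs)       # deep_copy=False behaviour
deep = copy.deepcopy(core_attrs)      # deep_copy=True behaviour

# A shallow copy shares mutable values with the original...
shallow["sampling_grid_dims_in_pixels"][0] = 256
print(core_attrs["sampling_grid_dims_in_pixels"])  # [256, 512]

# ...whereas a deep copy does not.
deep["sampling_grid_dims_in_pixels"][1] = 128
print(core_attrs["sampling_grid_dims_in_pixels"])  # [256, 512]
```

Hence deep_copy=True is safer when the returned dict may be mutated, at the cost of copying potentially large values.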

classmethod get_de_pre_serialization_funcs()

Return the de-pre-serialization functions.

Returns:
de_pre_serialization_funcs : dict

The attribute de_pre_serialization_funcs.

get_ml_data_instances(single_dim_slice=0, device_name=None, decode=False, unnormalize_normalizable_elems=False)

Return a subset of the machine learning data instances as a dictionary.

This method returns a subset of the machine learning (ML) data instances of the ML dataset as a dictionary ml_data_instances. Each dict key in ml_data_instances is the name of a feature of the subset of the ML data instances, and the value corresponding to the dict key is a PyTorch tensor storing the values of the feature of the subset of ML data instances. The name of any feature is a string that stores the HDF5 path to the HDF5 dataset storing the values of said feature of the ML dataset.

Parameters:
single_dim_slice : slice, optional

single_dim_slice specifies the subset of ML data instances to return as a dictionary. The ML data instances are indexed from 0 to total_num_ml_data_instances-1, where total_num_ml_data_instances is the total number of ML data instances in the ML dataset. tuple(range(total_num_ml_data_instances))[single_dim_slice] yields the indices ml_data_instance_subset_indices of the ML data instances to return.

device_name : str | None, optional

This parameter specifies the device to be used to store the data of the PyTorch tensors. If device_name is a string, then it is the name of the device to be used, e.g. "cuda" or "cpu". If device_name is set to None and a GPU device is available, then a GPU device is used. Otherwise, the CPU is used.

decode : bool, optional

Specifies whether or not the subset of ML data instances is to be decoded. Generally speaking, some features of the subset of ML data instances may be encoded, meaning that the values of said features are not directly present in a given representation, be it a dictionary representation, an HDF5 file representation, or something else. However, the values of these features can be decoded, i.e. reconstructed from other features. If decode is set to True, then any features that have been encoded will be decoded and will be present in the dictionary representation of the subset of ML data instances. Otherwise, any features that have been encoded will not be decoded, and will not be present in the dictionary representation.

unnormalize_normalizable_elems : bool, optional

In emicroml, the non-decoded normalizable features of ML datasets stored in HDF5 files are expected to be normalized via a linear transformation such that the minimum and maximum values of such features lie within the closed interval \([0, 1]\).

If unnormalize_normalizable_elems is set to True, then the dictionary representation of the subset of ML data instances will store the unnormalized values of the normalizable features. Otherwise, the dictionary representation of the subset of ML data instances will store the normalized values of the normalizable features, which lie within the closed interval of \([0, 1]\).

Returns:
ml_data_instances : dict

The subset of ML data instances, represented as a dictionary. Let key be a dict key of ml_data_instances, specifying one of the features of the subset of ML data instances, and let num_ml_data_instances_in_subset be len(ml_data_instances[key]). For every nonnegative integer n less than num_ml_data_instances_in_subset, ml_data_instances[key][n] yields the value of the feature specified by key of the ML data instance with the index ml_data_instance_subset_indices[n].
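The single_dim_slice indexing convention described above can be verified with plain Python; the numbers below are made up, with a list standing in for a PyTorch tensor.

```python
# Illustrative check of the single_dim_slice indexing convention above.
total_num_ml_data_instances = 10

# Suppose the full dataset stores, for some feature, one value per
# ML data instance (made-up numbers standing in for PyTorch tensors).
feature_values = [10 * idx for idx in range(total_num_ml_data_instances)]

single_dim_slice = slice(2, 8, 2)
ml_data_instance_subset_indices = \
    tuple(range(total_num_ml_data_instances))[single_dim_slice]
print(ml_data_instance_subset_indices)  # (2, 4, 6)

# ml_data_instances[key][n] corresponds to the ML data instance with
# the index ml_data_instance_subset_indices[n].
subset = [feature_values[idx] for idx in ml_data_instance_subset_indices]
print(subset)  # [20, 40, 60]
```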

get_ml_data_instances_as_signals(single_dim_slice=0, device_name=None, sampling_grid_dims_in_pixels=(512, 512), least_squares_alg_params=None)

Return a subset of the machine learning data instances as a sequence of Hyperspy signals.

See the documentation for the classes fakecbed.discretized.CBEDPattern, distoptica.DistortionModel, and hyperspy._signals.signal2d.Signal2D for discussions on “fake” CBED patterns, distortion models, and Hyperspy signals respectively.

For each machine learning (ML) data instance in the subset, an instance distortion_model of the class distoptica.DistortionModel is constructed according to the ML data instance's features. The object distortion_model is a distortion model that describes the distortion field of the imaged CBED pattern of the ML data instance. After constructing distortion_model, an instance fake_cbed_pattern of the class fakecbed.discretized.CBEDPattern is constructed according to the ML data instance's features and distortion_model. fake_cbed_pattern is a fake CBED pattern representation of the CBED pattern of the ML data instance. Next, a Hyperspy signal fake_cbed_pattern_signal is obtained from fake_cbed_pattern.signal. The Hyperspy signal representation of the ML data instance is obtained by modifying fake_cbed_pattern_signal.data[1:3] in place according to the ML data instance's features. Note that the illumination support of the fake CBED pattern representation of the CBED pattern of the ML data instance is inferred from the features of the ML data instance, and is stored in fake_cbed_pattern_signal.data[1]. Moreover, the illumination support implied by the signal's metadata should be ignored.

Parameters:
single_dim_slice : slice, optional

single_dim_slice specifies the subset of ML data instances to return as a dictionary. The ML data instances are indexed from 0 to total_num_ml_data_instances-1, where total_num_ml_data_instances is the total number of ML data instances in the ML dataset. tuple(range(total_num_ml_data_instances))[single_dim_slice] yields the indices ml_data_instance_subset_indices of the ML data instances to return.

device_name : str | None, optional

This parameter specifies the device to be used to perform computationally intensive calls to PyTorch functions and to store intermediate arrays of the type torch.Tensor. If device_name is a string, then it is the name of the device to be used, e.g. "cuda" or "cpu". If device_name is set to None and a GPU device is available, then a GPU device is used. Otherwise, the CPU is used.

sampling_grid_dims_in_pixels : array_like (int, shape=(2,)), optional

The dimensions of the sampling grid, in units of pixels, used for all distortion models.

least_squares_alg_params : distoptica.LeastSquaresAlgParams | None, optional

least_squares_alg_params specifies the parameters of the least-squares algorithm to be used to calculate the mappings of fractional Cartesian coordinates of distorted images to those of the corresponding undistorted images. least_squares_alg_params is used to calculate the distortion models mentioned above in the summary documentation. If least_squares_alg_params is set to None, then the parameter is reassigned the value distoptica.LeastSquaresAlgParams(). See the documentation for the class distoptica.LeastSquaresAlgParams for details on the parameters of the least-squares algorithm.

Returns:
ml_data_instances_as_signals : array_like (hyperspy._signals.signal2d.Signal2D, ndim=1)

The subset of ML data instances, represented as a sequence of Hyperspy signals. Let num_ml_data_instances_in_subset be len(ml_data_instances_as_signals). For every nonnegative integer n less than num_ml_data_instances_in_subset, ml_data_instances_as_signals[n] yields the Hyperspy signal representation of the ML data instance with the index ml_data_instance_subset_indices[n].

classmethod get_pre_serialization_funcs()

Return the pre-serialization functions.

Returns:
pre_serialization_funcs : dict

The attribute pre_serialization_funcs.

classmethod get_validation_and_conversion_funcs()

Return the validation and conversion functions.

Returns:
validation_and_conversion_funcs : dict

The attribute validation_and_conversion_funcs.

classmethod load(filename='serialized_rep_of_fancytype.json', skip_validation_and_conversion=False)

Construct an instance from a serialized representation that is stored in a JSON file.

Users can save serialized representations to JSON files using the method fancytypes.PreSerializable.dump().

Parameters:
filename : str, optional

The relative or absolute path to the JSON file that is storing the serialized representation of an instance.

filename is expected to be such that json.load(open(filename, "r")) does not raise an exception.

Let serializable_rep=json.load(open(filename, "r")).

Let validation_and_conversion_funcs and de_pre_serialization_funcs denote the attributes validation_and_conversion_funcs and de_pre_serialization_funcs respectively, both of which are dict objects as well.

filename is also expected to be such that de_pre_serialization_funcs[key](serializable_rep[key]) does not raise an exception for each dict key key in de_pre_serialization_funcs.

Let core_attrs_candidate be a dict object that has the same keys as de_pre_serialization_funcs, where for each dict key key in serializable_rep, core_attrs_candidate[key] is set to de_pre_serialization_funcs[key](serializable_rep[key]).

filename is also expected to be such that validation_and_conversion_funcs[key](core_attrs_candidate) does not raise an exception for each dict key key in serializable_rep.

skip_validation_and_conversion : bool, optional

Let core_attrs denote the attribute core_attrs, which is a dict object.

Let core_attrs_candidate be as defined in the above description of filename.

If skip_validation_and_conversion is set to False, then for each key key in core_attrs_candidate, core_attrs[key] is set to validation_and_conversion_funcs[key](core_attrs_candidate), with validation_and_conversion_funcs and core_attrs_candidate being introduced in the above description of filename.

Otherwise, if skip_validation_and_conversion is set to True, then core_attrs is set to core_attrs_candidate.copy(). This option is desired primarily when the user wants to avoid potentially expensive deep copies and/or conversions of the dict values of core_attrs_candidate, as it is guaranteed that no copies or conversions are made in this case.

Returns:
instance_of_current_cls : Current class

An instance constructed from the serialized representation stored in the JSON file.

classmethod loads(serialized_rep='{}', skip_validation_and_conversion=False)

Construct an instance from a serialized representation.

Users can generate serialized representations using the method dumps().

Parameters:
serialized_rep : str | bytes | bytearray, optional

The serialized representation.

serialized_rep is expected to be such that json.loads(serialized_rep) does not raise an exception.

Let serializable_rep=json.loads(serialized_rep).

Let validation_and_conversion_funcs and de_pre_serialization_funcs denote the attributes validation_and_conversion_funcs and de_pre_serialization_funcs respectively, both of which are dict objects as well.

serialized_rep is also expected to be such that de_pre_serialization_funcs[key](serializable_rep[key]) does not raise an exception for each dict key key in de_pre_serialization_funcs.

Let core_attrs_candidate be a dict object that has the same keys as serializable_rep, where for each dict key key in de_pre_serialization_funcs, core_attrs_candidate[key] is set to de_pre_serialization_funcs[key](serializable_rep[key]).

serialized_rep is also expected to be such that validation_and_conversion_funcs[key](core_attrs_candidate) does not raise an exception for each dict key key in serializable_rep.

skip_validation_and_conversion : bool, optional

Let core_attrs denote the attribute core_attrs, which is a dict object.

If skip_validation_and_conversion is set to False, then for each key key in core_attrs_candidate, core_attrs[key] is set to validation_and_conversion_funcs[key](core_attrs_candidate), with validation_and_conversion_funcs and core_attrs_candidate being introduced in the above description of serialized_rep.

Otherwise, if skip_validation_and_conversion is set to True, then core_attrs is set to core_attrs_candidate.copy(). This option is desired primarily when the user wants to avoid potentially expensive deep copies and/or conversions of the dict values of core_attrs_candidate, as it is guaranteed that no copies or conversions are made in this case.

Returns:
instance_of_current_cls : Current class

An instance constructed from the serialized representation.

property max_num_disks_in_any_cbed_pattern

int: The maximum possible number of CBED disks in any imaged CBED pattern stored in the machine learning dataset.

Note that max_num_disks_in_any_cbed_pattern should be considered read-only.

property normalization_biases

dict: The normalization biases of the normalizable elements.

Generally speaking, a machine learning (ML) data instance contains one or more features, which can be grouped into two different categories: normalizable and unnormalizable features.

In emicroml, the non-decoded normalizable features of ML datasets stored in HDF5 files are expected to be normalized via a linear transformation such that the minimum and maximum values of such features lie within the closed interval \([0, 1]\).

Let unnormalized_values be the unnormalized values of a normalizable feature in a ML dataset. The normalization is performed by

normalized_values = (unnormalized_values*normalization_weight
                     + normalization_bias)

where normalized_values are the normalized values, normalization_weight is a valid normalization weight, and normalization_bias is a valid normalization bias.

The current attribute stores the normalization biases of the normalizable features in the ML dataset. Each dict key in normalization_biases is the name of a normalizable feature, and the value corresponding to the dict key is the value of the normalization bias of said normalizable feature. The name of any feature is a string that stores the HDF5 path to the HDF5 dataset storing the values of said feature of the ML dataset.

Note that normalization_biases should be considered read-only.
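Since the normalization is a linear transformation, it is invertible; the sketch below works through it with made-up values for the weight and bias (which in practice are read from the dataset via this attribute and normalization_weights).

```python
# Made-up unnormalized values, weight, and bias, illustrating the
# linear normalization described above.
unnormalized_values = [0.0, 5.0, 10.0]
normalization_weight = 0.1   # 1 / (max - min)
normalization_bias = 0.0     # -min / (max - min)

normalized_values = [value * normalization_weight + normalization_bias
                     for value in unnormalized_values]
print(normalized_values)  # [0.0, 0.5, 1.0]

# Inverting the transformation recovers the unnormalized values, which
# is in effect what unnormalize_normalizable_elems=True does in
# get_ml_data_instances.
recovered = [(value - normalization_bias) / normalization_weight
             for value in normalized_values]
print(recovered)  # [0.0, 5.0, 10.0]
```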

property normalization_weights

dict: The normalization weights of the normalizable elements.

Generally speaking, a machine learning (ML) data instance contains one or more features, which can be grouped into two different categories: normalizable and unnormalizable features.

In emicroml, the non-decoded normalizable features of ML datasets stored in HDF5 files are expected to be normalized via a linear transformation such that the minimum and maximum values of such features lie within the closed interval \([0, 1]\).

Let unnormalized_values be the unnormalized values of a normalizable feature in a ML dataset. The normalization is performed by

normalized_values = (unnormalized_values*normalization_weight
                     + normalization_bias)

where normalized_values are the normalized values, normalization_weight is a valid normalization weight, and normalization_bias is a valid normalization bias.

The current attribute stores the normalization weights of the normalizable features in the ML dataset. Each dict key in normalization_weights is the name of a normalizable feature, and the value corresponding to the dict key is the value of the normalization weight of said normalizable feature. The name of any feature is a string that stores the HDF5 path to the HDF5 dataset storing the values of said feature of the ML dataset.

Note that normalization_weights should be considered read-only.

property num_pixels_across_each_cbed_pattern

int: The number of pixels across each imaged CBED pattern stored in the machine learning dataset.

Note that num_pixels_across_each_cbed_pattern should be considered read-only.

property pre_serialization_funcs

dict: The pre-serialization functions.

pre_serialization_funcs has the same keys as the attribute validation_and_conversion_funcs, which is also a dict object.

Let validation_and_conversion_funcs and core_attrs denote the attributes validation_and_conversion_funcs and core_attrs respectively, the last of which is a dict object as well.

For each dict key key in core_attrs, pre_serialization_funcs[key](core_attrs[key]) is expected to yield a serializable object, i.e. it should yield an object that can be passed into the function json.dumps without raising an exception.

Note that pre_serialization_funcs should be considered read-only.

pre_serialize()

Pre-serialize instance.

Returns:
serializable_rep : dict

A serializable representation of an instance.

update(new_core_attr_subset_candidate, skip_validation_and_conversion=False)

Update a subset of the core attributes.

Parameters:
new_core_attr_subset_candidate : dict

A dict object whose keys form a subset of the core attribute names, and whose values are the candidate new values of the corresponding core attributes.

skip_validation_and_conversion : bool, optional

Let validation_and_conversion_funcs and core_attrs denote the attributes validation_and_conversion_funcs and core_attrs respectively, both of which are dict objects.

If skip_validation_and_conversion is set to False, then for each key key in core_attrs that is also in new_core_attr_subset_candidate, core_attrs[key] is set to validation_and_conversion_funcs[key](new_core_attr_subset_candidate).

Otherwise, if skip_validation_and_conversion is set to True, then for each key key in core_attrs that is also in new_core_attr_subset_candidate, core_attrs[key] is set to new_core_attr_subset_candidate[key]. This option is desired primarily when the user wants to avoid potentially expensive deep copies and/or conversions of the dict values of new_core_attr_subset_candidate, as it is guaranteed that no copies or conversions are made in this case.
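The update semantics described above can be sketched in plain Python; the core attribute name and the validation function below are hypothetical, and update_sketch is not the actual implementation.

```python
# Illustrative sketch of the update semantics described above, with a
# hypothetical core attribute and validation function.
validation_and_conversion_funcs = {
    "max_num_ml_data_instances_per_chunk":
        lambda cand: int(cand["max_num_ml_data_instances_per_chunk"]),
}
core_attrs = {"max_num_ml_data_instances_per_chunk": 100}

def update_sketch(core_attrs, new_core_attr_subset_candidate,
                  skip_validation_and_conversion=False):
    # Only keys already present in core_attrs are updated.
    for key in core_attrs:
        if key in new_core_attr_subset_candidate:
            if skip_validation_and_conversion:
                core_attrs[key] = new_core_attr_subset_candidate[key]
            else:
                core_attrs[key] = validation_and_conversion_funcs[key](
                    new_core_attr_subset_candidate)

update_sketch(core_attrs, {"max_num_ml_data_instances_per_chunk": "250"})
print(core_attrs)  # {'max_num_ml_data_instances_per_chunk': 250}
```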

property validation_and_conversion_funcs

dict: The validation and conversion functions.

The keys of validation_and_conversion_funcs are the names of the constructor parameters, excluding skip_validation_and_conversion if it exists as a constructor parameter.

Let core_attrs denote the attribute core_attrs, which is also a dict object.

For each dict key key in core_attrs, validation_and_conversion_funcs[key](core_attrs) is expected to not raise an exception.

Note that validation_and_conversion_funcs should be considered read-only.