2.1.1.1.1.12. emicroml.modelling.cbed.distortion.estimation.MLDataset
- class MLDataset(path_to_ml_dataset, entire_ml_dataset_is_to_be_cached=False, ml_data_values_are_to_be_checked=False, max_num_ml_data_instances_per_chunk=100, skip_validation_and_conversion=False)[source]
Bases: _MLDataset

A wrapper to the PyTorch dataset class torch.utils.data.Dataset. The current class is a subclass of fancytypes.PreSerializableAndUpdatable.

The current class represents machine learning (ML) datasets that can be used to train and/or evaluate ML models represented by the class emicroml.modelling.cbed.distortion.estimation.MLModel.

- Parameters:
- path_to_ml_dataset : str, optional
The relative or absolute filename of the HDF5 file in which the ML dataset is stored. The input HDF5 file is assumed to have the same file structure as an HDF5 file generated by the function emicroml.modelling.cbed.distortion.estimation.generate_and_save_ml_dataset(); see the documentation of said function for a description of the file structure. Moreover, the input HDF5 file is assumed to have been created in a manner consistent with the way HDF5 files are generated by that function.
- entire_ml_dataset_is_to_be_cached : bool, optional
If entire_ml_dataset_is_to_be_cached is set to True, then as long as there is sufficient memory, the entire ML dataset is read from the HDF5 file and cached in the instance of the current class upon construction of said instance. In this case, method calls that access ML data instances do so via the cached ML dataset. Otherwise, the entire ML dataset is not read and cached upon construction of the instance of the current class, and method calls that access ML data instances do so by reading from the HDF5 file. The first scenario yields slower instance construction times, larger memory requirements, and faster ML dataset access post instance construction, compared to the second scenario. If the parameter ml_data_values_are_to_be_checked is set to True, then the construction times in the two aforementioned scenarios are comparable.
- ml_data_values_are_to_be_checked : bool, optional
If ml_data_values_are_to_be_checked is set to True, then the data values of the relevant HDF5 datasets stored in the HDF5 file are checked, raising an exception if any data values are invalid. Otherwise, the data values are not checked.
- max_num_ml_data_instances_per_chunk : int | float("inf"), optional
If ml_data_values_are_to_be_checked is set to False, then max_num_ml_data_instances_per_chunk is effectively ignored. Otherwise, max_num_ml_data_instances_per_chunk specifies the maximum number of ML data instances to read from the HDF5 file at a time when validating the data values stored therein.
- skip_validation_and_conversion : bool, optional
Let validation_and_conversion_funcs and core_attrs denote the attributes validation_and_conversion_funcs and core_attrs respectively, both of which being dict objects.

Let params_to_be_mapped_to_core_attrs denote the dict representation of the constructor parameters excluding the parameter skip_validation_and_conversion, where each dict key key is a different constructor parameter name, excluding the name "skip_validation_and_conversion", and params_to_be_mapped_to_core_attrs[key] would yield the value of the constructor parameter with the name given by key.

If skip_validation_and_conversion is set to False, then for each key key in params_to_be_mapped_to_core_attrs, core_attrs[key] is set to validation_and_conversion_funcs[key](params_to_be_mapped_to_core_attrs).

Otherwise, if skip_validation_and_conversion is set to True, then core_attrs is set to params_to_be_mapped_to_core_attrs.copy(). This option is desired primarily when the user wants to avoid potentially expensive deep copies and/or conversions of the dict values of params_to_be_mapped_to_core_attrs, as it is guaranteed that no copies or conversions are made in this case.
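The interaction between skip_validation_and_conversion and the construction of core_attrs can be sketched in plain Python. The validation function below is hypothetical, for illustration only; the actual validation and conversion functions are internal to emicroml:

```python
# Hypothetical validation/conversion function for a single constructor
# parameter (not one of the actual emicroml functions).
def _check_and_convert_path_to_ml_dataset(params):
    path = params["path_to_ml_dataset"]
    if not isinstance(path, str):
        raise TypeError("path_to_ml_dataset must be a string.")
    return path

validation_and_conversion_funcs = \
    {"path_to_ml_dataset": _check_and_convert_path_to_ml_dataset}
params_to_be_mapped_to_core_attrs = {"path_to_ml_dataset": "ml_dataset.h5"}

skip_validation_and_conversion = False
if skip_validation_and_conversion:
    # No copies or conversions of the dict values are made.
    core_attrs = params_to_be_mapped_to_core_attrs.copy()
else:
    # Each parameter is validated and possibly converted.
    core_attrs = {key: validation_and_conversion_funcs[key](
                      params_to_be_mapped_to_core_attrs)
                  for key in params_to_be_mapped_to_core_attrs}

print(core_attrs)  # {'path_to_ml_dataset': 'ml_dataset.h5'}
```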
- Attributes:
core_attrs
dict: The “core attributes”.
de_pre_serialization_funcs
dict: The de-pre-serialization functions.
max_num_disks_in_any_cbed_pattern
int: The maximum possible number of CBED disks in any imaged CBED pattern stored in the machine learning dataset.
normalization_biases
dict: The normalization biases of the normalizable elements.
normalization_weights
dict: The normalization weights of the normalizable elements.
num_pixels_across_each_cbed_pattern
int: The number of pixels across each imaged CBED pattern stored in the machine learning dataset.
pre_serialization_funcs
dict: The pre-serialization functions.
validation_and_conversion_funcs
dict: The validation and conversion functions.
Methods

de_pre_serialize([serializable_rep, ...])
Construct an instance from a serializable representation.
dump([filename, overwrite])
Serialize instance and save the result in a JSON file.
dumps()
Serialize instance.
get_core_attrs([deep_copy])
Return the core attributes.
get_de_pre_serialization_funcs()
Return the de-pre-serialization functions.
get_ml_data_instances([single_dim_slice, ...])
Return a subset of the machine learning data instances as a dictionary.
get_ml_data_instances_as_signals([single_dim_slice, ...])
Return a subset of the machine learning data instances as a sequence of Hyperspy signals.
get_pre_serialization_funcs()
Return the pre-serialization functions.
get_validation_and_conversion_funcs()
Return the validation and conversion functions.
load([filename, skip_validation_and_conversion])
Construct an instance from a serialized representation that is stored in a JSON file.
loads([serialized_rep, ...])
Construct an instance from a serialized representation.
pre_serialize()
Pre-serialize instance.
update(new_core_attr_subset_candidate[, ...])
Update a subset of the core attributes.
execute_post_core_attrs_update_actions
- property core_attrs
dict: The “core attributes”.
The keys of core_attrs are the same as those of the attribute validation_and_conversion_funcs, which is also a dict object.

Note that core_attrs should be considered read-only.
- property de_pre_serialization_funcs
dict: The de-pre-serialization functions.
de_pre_serialization_funcs has the same keys as the attribute validation_and_conversion_funcs, which is also a dict object.

Let validation_and_conversion_funcs and pre_serialization_funcs denote the attributes validation_and_conversion_funcs and pre_serialization_funcs respectively, the last of which being a dict object as well.

Let core_attrs_candidate_1 be any dict object that has the same keys as validation_and_conversion_funcs, where for each dict key key in core_attrs_candidate_1, validation_and_conversion_funcs[key](core_attrs_candidate_1) does not raise an exception.

Let serializable_rep be a dict object that has the same keys as core_attrs_candidate_1, where for each dict key key in core_attrs_candidate_1, serializable_rep[key] is set to pre_serialization_funcs[key](core_attrs_candidate_1[key]).

The items of de_pre_serialization_funcs are expected to be set to callable objects that would lead to de_pre_serialization_funcs[key](serializable_rep[key]) not raising an exception for each dict key key in serializable_rep.

Let core_attrs_candidate_2 be a dict object that has the same keys as serializable_rep, where for each dict key key in validation_and_conversion_funcs, core_attrs_candidate_2[key] is set to de_pre_serialization_funcs[key](serializable_rep[key]).

The items of de_pre_serialization_funcs are also expected to be set to callable objects that would lead to validation_and_conversion_funcs[key](core_attrs_candidate_2) not raising an exception for each dict key key in core_attrs_candidate_2.

Note that de_pre_serialization_funcs should be considered read-only.
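The round trip described above can be illustrated with a toy example. The functions and the key used below are hypothetical, not the actual emicroml ones:

```python
import pathlib

# Hypothetical (de-)pre-serialization functions for a single key: a path is
# pre-serialized as a plain string and de-pre-serialized back into a Path.
pre_serialization_funcs = {"path_to_ml_dataset": str}
de_pre_serialization_funcs = {"path_to_ml_dataset": pathlib.Path}

core_attrs_candidate_1 = {"path_to_ml_dataset": pathlib.Path("ml_dataset.h5")}

# Pre-serialize each core attribute into a JSON-friendly object.
serializable_rep = \
    {key: pre_serialization_funcs[key](core_attrs_candidate_1[key])
     for key in core_attrs_candidate_1}

# De-pre-serialize back into core-attribute candidates.
core_attrs_candidate_2 = \
    {key: de_pre_serialization_funcs[key](serializable_rep[key])
     for key in serializable_rep}

assert core_attrs_candidate_2 == core_attrs_candidate_1
```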
- classmethod de_pre_serialize(serializable_rep={}, skip_validation_and_conversion=False)
Construct an instance from a serializable representation.
- Parameters:
- serializable_rep : dict, optional
A dict object that has the same keys as the attribute validation_and_conversion_funcs, which is also a dict object.

Let validation_and_conversion_funcs and de_pre_serialization_funcs denote the attributes validation_and_conversion_funcs and de_pre_serialization_funcs respectively, the last of which being a dict object as well.

The items of serializable_rep are expected to be objects that would lead to de_pre_serialization_funcs[key](serializable_rep[key]) not raising an exception for each dict key key in serializable_rep.

Let core_attrs_candidate be a dict object that has the same keys as serializable_rep, where for each dict key key in serializable_rep, core_attrs_candidate[key] is set to de_pre_serialization_funcs[key](serializable_rep[key]).

The items of serializable_rep are also expected to be set to objects that would lead to validation_and_conversion_funcs[key](core_attrs_candidate) not raising an exception for each dict key key in serializable_rep.
- skip_validation_and_conversion : bool, optional
Let core_attrs denote the attribute core_attrs, which is a dict object.

If skip_validation_and_conversion is set to False, then for each key key in serializable_rep, core_attrs[key] is set to validation_and_conversion_funcs[key](core_attrs_candidate), with validation_and_conversion_funcs and core_attrs_candidate being introduced in the above description of serializable_rep.

Otherwise, if skip_validation_and_conversion is set to True, then core_attrs is set to core_attrs_candidate.copy(). This option is desired primarily when the user wants to avoid potentially expensive deep copies and/or conversions of the dict values of core_attrs_candidate, as it is guaranteed that no copies or conversions are made in this case.
- Returns:
- instance_of_current_cls : Current class
An instance constructed from the serializable representation serializable_rep.
- dump(filename='serialized_rep_of_fancytype.json', overwrite=False)
Serialize instance and save the result in a JSON file.
- Parameters:
- filename : str, optional
The relative or absolute path to the JSON file in which to store the serialized representation of an instance.
- overwrite : bool, optional
If overwrite is set to False and a file exists at the path filename, then the serialized instance is not written to that file and an exception is raised. Otherwise, the serialized instance will be written to that file, provided no other issues occur.
- dumps()
Serialize instance.
- Returns:
- serialized_rep : dict
A serialized representation of an instance.
- get_core_attrs(deep_copy=True)
Return the core attributes.
- Parameters:
- deep_copy : bool, optional
Let core_attrs denote the attribute core_attrs, which is a dict object.

If deep_copy is set to True, then a deep copy of core_attrs is returned. Otherwise, a shallow copy of core_attrs is returned.
- Returns:
- core_attrs : dict
The attribute core_attrs.
- classmethod get_de_pre_serialization_funcs()
Return the de-pre-serialization functions.
- Returns:
- de_pre_serialization_funcs : dict
The attribute de_pre_serialization_funcs.
- get_ml_data_instances(single_dim_slice=0, device_name=None, decode=False, unnormalize_normalizable_elems=False)
Return a subset of the machine learning data instances as a dictionary.
This method returns a subset of the machine learning (ML) data instances of the ML dataset as a dictionary ml_data_instances. Each dict key in ml_data_instances is the name of a feature of the subset of the ML data instances, and the value corresponding to the dict key is a PyTorch tensor storing the values of the feature of the subset of ML data instances. The name of any feature is a string that stores the HDF5 path to the HDF5 dataset storing the values of said feature of the ML dataset.
- Parameters:
- single_dim_slice : slice, optional
single_dim_slice specifies the subset of ML data instances to return as a dictionary. The ML data instances are indexed from 0 to total_num_ml_data_instances-1, where total_num_ml_data_instances is the total number of ML data instances in the ML dataset. tuple(range(total_num_ml_data_instances))[single_dim_slice] yields the indices ml_data_instance_subset_indices of the ML data instances to return.
- device_name : str | None, optional
This parameter specifies the device to be used to store the data of the PyTorch tensors. If device_name is a string, then it is the name of the device to be used, e.g. "cuda" or "cpu". If device_name is set to None and a GPU device is available, then a GPU device is to be used. Otherwise, the CPU is used.
- decode : bool, optional
Specifies whether or not the subset of ML data instances are to be decoded. Generally speaking, some features of the subset of ML data instances may be encoded, implying that the values of said features are not directly present in whatever representation, be it a dictionary representation, an HDF5 file representation, or something else. However, the values of these features can be decoded, i.e. reconstructed from other features. If decode is set to True, then any features that have been encoded will be decoded, and will be present in the dictionary representation of the subset of ML data instances. Otherwise, any features that have been encoded will not be decoded, and will not be present in the dictionary representation.
- unnormalize_normalizable_elems : bool, optional
In emicroml, the non-decoded normalizable features of ML datasets stored in HDF5 files are expected to be normalized via a linear transformation such that the minimum and maximum values of such features lie within the closed interval \([0, 1]\).

If unnormalize_normalizable_elems is set to True, then the dictionary representation of the subset of ML data instances will store the unnormalized values of the normalizable features. Otherwise, the dictionary representation of the subset of ML data instances will store the normalized values of the normalizable features, which lie within the closed interval \([0, 1]\).
- Returns:
- ml_data_instances : dict
The subset of ML data instances, represented as a dictionary. Let key be a dict key of ml_data_instances specifying one of the features of the subset of the ML data instances. Let num_ml_data_instances_in_subset be len(ml_data_instances[key]). For every nonnegative integer n less than num_ml_data_instances_in_subset, ml_data_instances[key][n] yields the value of the feature specified by key of the ML data instance with the index ml_data_instance_subset_indices[n].
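The way single_dim_slice selects the indices ml_data_instance_subset_indices can be illustrated in pure Python, with no emicroml objects involved:

```python
# Pure-Python illustration of how ``single_dim_slice`` selects the indices
# ``ml_data_instance_subset_indices`` of the ML data instances to return.
total_num_ml_data_instances = 10
single_dim_slice = slice(2, 8, 2)

ml_data_instance_subset_indices = \
    tuple(range(total_num_ml_data_instances))[single_dim_slice]

print(ml_data_instance_subset_indices)  # (2, 4, 6)
```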
- get_ml_data_instances_as_signals(single_dim_slice=0, device_name=None, sampling_grid_dims_in_pixels=(512, 512), least_squares_alg_params=None)
Return a subset of the machine learning data instances as a sequence of Hyperspy signals.
See the documentation for the classes fakecbed.discretized.CBEDPattern, distoptica.DistortionModel, and hyperspy._signals.signal2d.Signal2D for discussions on “fake” CBED patterns, distortion models, and Hyperspy signals respectively.

For each machine learning (ML) data instance in the subset, an instance distortion_model of the class distoptica.DistortionModel is constructed according to the ML data instance’s features. The object distortion_model is a distortion model that describes the distortion field of the imaged CBED pattern of the ML data instance. After constructing distortion_model, an instance fake_cbed_pattern of the class fakecbed.discretized.CBEDPattern is constructed according to the ML data instance’s features and distortion_model. fake_cbed_pattern is a fake CBED pattern representation of the CBED pattern of the ML data instance. Next, a Hyperspy signal fake_cbed_pattern_signal is obtained from fake_cbed_pattern.signal. The Hyperspy signal representation of the ML data instance is obtained by modifying in place fake_cbed_pattern_signal.data[1:3] according to the ML data instance’s features. Note that the illumination support of the fake CBED pattern representation of the CBED pattern of the ML data instance is inferred from the features of the ML data instance, and is stored in fake_cbed_pattern_signal.data[1]. Moreover, the illumination support implied by the signal’s metadata should be ignored.
- Parameters:
- single_dim_slice : slice, optional
single_dim_slice specifies the subset of ML data instances to return as a sequence of Hyperspy signals. The ML data instances are indexed from 0 to total_num_ml_data_instances-1, where total_num_ml_data_instances is the total number of ML data instances in the ML dataset. tuple(range(total_num_ml_data_instances))[single_dim_slice] yields the indices ml_data_instance_subset_indices of the ML data instances to return.
- device_name : str | None, optional
This parameter specifies the device to be used to perform computationally intensive calls to PyTorch functions and to store intermediate arrays of the type torch.Tensor. If device_name is a string, then it is the name of the device to be used, e.g. "cuda" or "cpu". If device_name is set to None and a GPU device is available, then a GPU device is to be used. Otherwise, the CPU is used.
- sampling_grid_dims_in_pixels : array_like (int, shape=(2,)), optional
The dimensions of the sampling grid, in units of pixels, used for all distortion models.
- least_squares_alg_params : distoptica.LeastSquaresAlgParams | None, optional
least_squares_alg_params specifies the parameters of the least-squares algorithm to be used to calculate the mappings of fractional Cartesian coordinates of distorted images to those of the corresponding undistorted images. least_squares_alg_params is used to calculate the distortion models mentioned above in the summary documentation. If least_squares_alg_params is set to None, then the parameter will be reassigned to the value distoptica.LeastSquaresAlgParams(). See the documentation for the class distoptica.LeastSquaresAlgParams for details on the parameters of the least-squares algorithm.
- Returns:
- ml_data_instances_as_signals : array_like (hyperspy._signals.signal2d.Signal2D, ndim=1)
The subset of ML data instances, represented as a sequence of Hyperspy signals. Let num_ml_data_instances_in_subset be len(ml_data_instances_as_signals). For every nonnegative integer n less than num_ml_data_instances_in_subset, ml_data_instances_as_signals[n] yields the Hyperspy signal representation of the ML data instance with the index ml_data_instance_subset_indices[n].
- classmethod get_pre_serialization_funcs()
Return the pre-serialization functions.
- Returns:
- pre_serialization_funcs : dict
The attribute pre_serialization_funcs.
- classmethod get_validation_and_conversion_funcs()
Return the validation and conversion functions.
- Returns:
- validation_and_conversion_funcs : dict
The attribute validation_and_conversion_funcs.
- classmethod load(filename='serialized_rep_of_fancytype.json', skip_validation_and_conversion=False)
Construct an instance from a serialized representation that is stored in a JSON file.
Users can save serialized representations to JSON files using the method fancytypes.PreSerializable.dump().
- Parameters:
- filename : str, optional
The relative or absolute path to the JSON file that is storing the serialized representation of an instance.

filename is expected to be such that json.load(open(filename, "r")) does not raise an exception. Let serializable_rep=json.load(open(filename, "r")).

Let validation_and_conversion_funcs and de_pre_serialization_funcs denote the attributes validation_and_conversion_funcs and de_pre_serialization_funcs respectively, both of which being dict objects as well.

filename is also expected to be such that de_pre_serialization_funcs[key](serializable_rep[key]) does not raise an exception for each dict key key in de_pre_serialization_funcs.

Let core_attrs_candidate be a dict object that has the same keys as de_pre_serialization_funcs, where for each dict key key in serializable_rep, core_attrs_candidate[key] is set to de_pre_serialization_funcs[key](serializable_rep[key]).

filename is also expected to be such that validation_and_conversion_funcs[key](core_attrs_candidate) does not raise an exception for each dict key key in serializable_rep.
- skip_validation_and_conversion : bool, optional
Let core_attrs denote the attribute core_attrs, which is a dict object.

Let core_attrs_candidate be as defined in the above description of filename.

If skip_validation_and_conversion is set to False, then for each key key in core_attrs_candidate, core_attrs[key] is set to validation_and_conversion_funcs[key](core_attrs_candidate), with validation_and_conversion_funcs and core_attrs_candidate being introduced in the above description of filename.

Otherwise, if skip_validation_and_conversion is set to True, then core_attrs is set to core_attrs_candidate.copy(). This option is desired primarily when the user wants to avoid potentially expensive deep copies and/or conversions of the dict values of core_attrs_candidate, as it is guaranteed that no copies or conversions are made in this case.
- Returns:
- instance_of_current_cls : Current class
An instance constructed from the serialized representation stored in the JSON file.
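The JSON round trip underlying dump and load can be sketched with the standard library alone; the actual methods additionally apply the (de-)pre-serialization and validation steps. The dict contents below are illustrative:

```python
import json
import os
import tempfile

# An illustrative serializable representation (keys mirror the constructor
# parameters; the values here are placeholders).
serializable_rep = {"path_to_ml_dataset": "ml_dataset.h5",
                    "entire_ml_dataset_is_to_be_cached": False}

filename = os.path.join(tempfile.mkdtemp(),
                        "serialized_rep_of_fancytype.json")

# What ``dump`` does at the file level: write the representation as JSON.
with open(filename, "w") as file_obj:
    json.dump(serializable_rep, file_obj)

# ``load`` expects json.load(open(filename, "r")) not to raise an exception.
with open(filename, "r") as file_obj:
    loaded_serializable_rep = json.load(file_obj)

assert loaded_serializable_rep == serializable_rep
```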
- classmethod loads(serialized_rep='{}', skip_validation_and_conversion=False)
Construct an instance from a serialized representation.
Users can generate serialized representations using the method dumps().
- Parameters:
- serialized_rep : str | bytes | bytearray, optional
The serialized representation.

serialized_rep is expected to be such that json.loads(serialized_rep) does not raise an exception. Let serializable_rep=json.loads(serialized_rep).

Let validation_and_conversion_funcs and de_pre_serialization_funcs denote the attributes validation_and_conversion_funcs and de_pre_serialization_funcs respectively, both of which being dict objects as well.

serialized_rep is also expected to be such that de_pre_serialization_funcs[key](serializable_rep[key]) does not raise an exception for each dict key key in de_pre_serialization_funcs.

Let core_attrs_candidate be a dict object that has the same keys as serializable_rep, where for each dict key key in de_pre_serialization_funcs, core_attrs_candidate[key] is set to de_pre_serialization_funcs[key](serializable_rep[key]).

serialized_rep is also expected to be such that validation_and_conversion_funcs[key](core_attrs_candidate) does not raise an exception for each dict key key in serializable_rep.
- skip_validation_and_conversion : bool, optional
Let core_attrs denote the attribute core_attrs, which is a dict object.

If skip_validation_and_conversion is set to False, then for each key key in core_attrs_candidate, core_attrs[key] is set to validation_and_conversion_funcs[key](core_attrs_candidate), with validation_and_conversion_funcs and core_attrs_candidate being introduced in the above description of serialized_rep.

Otherwise, if skip_validation_and_conversion is set to True, then core_attrs is set to core_attrs_candidate.copy(). This option is desired primarily when the user wants to avoid potentially expensive deep copies and/or conversions of the dict values of core_attrs_candidate, as it is guaranteed that no copies or conversions are made in this case.
- Returns:
- instance_of_current_cls : Current class
An instance constructed from the serialized representation.
- property max_num_disks_in_any_cbed_pattern
int: The maximum possible number of CBED disks in any imaged CBED pattern stored in the machine learning dataset.
Note that max_num_disks_in_any_cbed_pattern should be considered read-only.
- property normalization_biases
dict: The normalization biases of the normalizable elements.
Generally speaking, a machine learning (ML) data instance contains one or more features, which can be grouped into two different categories: normalizable and unnormalizable features.

In emicroml, the non-decoded normalizable features of ML datasets stored in HDF5 files are expected to be normalized via a linear transformation such that the minimum and maximum values of such features lie within the closed interval \([0, 1]\).

Let unnormalized_values be the unnormalized values of a normalizable feature in an ML dataset. The normalization is performed by

normalized_values = unnormalized_values*normalization_weight + normalization_bias

where normalized_values are the normalized values, normalization_weight is a valid normalization weight, and normalization_bias is a valid normalization bias.

The current attribute stores the normalization biases of the normalizable features in the ML dataset. Each dict key in normalization_biases is the name of a normalizable feature, and the value corresponding to the dict key is the value of the normalization bias of said normalizable feature. The name of any feature is a string that stores the HDF5 path to the HDF5 dataset storing the values of said feature of the ML dataset.

Note that normalization_biases should be considered read-only.
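The linear transformation above can be worked through with a small numeric example. The choice of deriving the weight and bias from the feature's minimum and maximum is an illustrative assumption; the actual values used by emicroml are those stored in normalization_weights and normalization_biases:

```python
# Toy feature values for one normalizable feature.
unnormalized_values = [4.0, 6.0, 8.0]

# Assumed scheme: map the feature's extrema onto 0 and 1.
min_val, max_val = min(unnormalized_values), max(unnormalized_values)
normalization_weight = 1.0 / (max_val - min_val)
normalization_bias = -min_val * normalization_weight

# normalized_values = unnormalized_values*weight + bias, as documented above.
normalized_values = [value * normalization_weight + normalization_bias
                     for value in unnormalized_values]
print(normalized_values)  # [0.0, 0.5, 1.0]

# Inverting the linear transformation recovers the unnormalized values,
# which is what unnormalize_normalizable_elems=True requests.
recovered_values = [(value - normalization_bias) / normalization_weight
                    for value in normalized_values]
assert recovered_values == unnormalized_values
```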
- property normalization_weights
dict: The normalization weights of the normalizable elements.
Generally speaking, a machine learning (ML) data instance contains one or more features, which can be grouped into two different categories: normalizable and unnormalizable features.

In emicroml, the non-decoded normalizable features of ML datasets stored in HDF5 files are expected to be normalized via a linear transformation such that the minimum and maximum values of such features lie within the closed interval \([0, 1]\).

Let unnormalized_values be the unnormalized values of a normalizable feature in an ML dataset. The normalization is performed by

normalized_values = unnormalized_values*normalization_weight + normalization_bias

where normalized_values are the normalized values, normalization_weight is a valid normalization weight, and normalization_bias is a valid normalization bias.

The current attribute stores the normalization weights of the normalizable features in the ML dataset. Each dict key in normalization_weights is the name of a normalizable feature, and the value corresponding to the dict key is the value of the normalization weight of said normalizable feature. The name of any feature is a string that stores the HDF5 path to the HDF5 dataset storing the values of said feature of the ML dataset.

Note that normalization_weights should be considered read-only.
- property num_pixels_across_each_cbed_pattern
int: The number of pixels across each imaged CBED pattern stored in the machine learning dataset.
Note that num_pixels_across_each_cbed_pattern should be considered read-only.
- property pre_serialization_funcs
dict: The pre-serialization functions.
pre_serialization_funcs has the same keys as the attribute validation_and_conversion_funcs, which is also a dict object.

Let validation_and_conversion_funcs and core_attrs denote the attributes validation_and_conversion_funcs and core_attrs respectively, the last of which being a dict object as well.

For each dict key key in core_attrs, pre_serialization_funcs[key](core_attrs[key]) is expected to yield a serializable object, i.e. it should yield an object that can be passed into the function json.dumps without raising an exception.

Note that pre_serialization_funcs should be considered read-only.
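The serializability requirement can be checked directly with json.dumps. The pre-serialization function and core attribute below are illustrative, not taken from emicroml:

```python
import json
import pathlib

# Illustrative pre-serialization function: serialize a pathlib.Path core
# attribute as a plain string (a JSON-friendly object).
pre_serialization_func = str
core_attr = pathlib.Path("ml_dataset.h5")

serializable_obj = pre_serialization_func(core_attr)

# The requirement stated above: json.dumps must accept the result.
json.dumps(serializable_obj)  # Raises no exception.
```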
- pre_serialize()
Pre-serialize instance.
- Returns:
- serializable_repdict
A serializable representation of an instance.
- update(new_core_attr_subset_candidate, skip_validation_and_conversion=False)
Update a subset of the core attributes.
- Parameters:
- new_core_attr_subset_candidate : dict, optional
A dict object.
- skip_validation_and_conversion : bool, optional
Let validation_and_conversion_funcs and core_attrs denote the attributes validation_and_conversion_funcs and core_attrs respectively, both of which being dict objects.

If skip_validation_and_conversion is set to False, then for each key key in core_attrs that is also in new_core_attr_subset_candidate, core_attrs[key] is set to validation_and_conversion_funcs[key](new_core_attr_subset_candidate).

Otherwise, if skip_validation_and_conversion is set to True, then for each key key in core_attrs that is also in new_core_attr_subset_candidate, core_attrs[key] is set to new_core_attr_subset_candidate[key]. This option is desired primarily when the user wants to avoid potentially expensive deep copies and/or conversions of the dict values of new_core_attr_subset_candidate, as it is guaranteed that no copies or conversions are made in this case.
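The key-matching behaviour described above can be sketched with plain dicts; the attribute and key names mirror the documentation, but no emicroml objects are involved, and keys not present in core_attrs are simply ignored:

```python
# Plain-dict sketch of ``update`` with skip_validation_and_conversion=True.
core_attrs = {"path_to_ml_dataset": "old_ml_dataset.h5",
              "entire_ml_dataset_is_to_be_cached": False}
new_core_attr_subset_candidate = {"path_to_ml_dataset": "new_ml_dataset.h5"}

# Only keys of ``core_attrs`` that also appear in the candidate dict are
# updated; in the real method each value would first be validated/converted
# unless skip_validation_and_conversion is True.
for key in core_attrs:
    if key in new_core_attr_subset_candidate:
        core_attrs[key] = new_core_attr_subset_candidate[key]

print(core_attrs)
# {'path_to_ml_dataset': 'new_ml_dataset.h5',
#  'entire_ml_dataset_is_to_be_cached': False}
```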
- property validation_and_conversion_funcs
dict: The validation and conversion functions.
The keys of validation_and_conversion_funcs are the names of the constructor parameters, excluding skip_validation_and_conversion if it exists as a constructor parameter.

Let core_attrs denote the attribute core_attrs, which is also a dict object.

For each dict key key in core_attrs, validation_and_conversion_funcs[key](core_attrs) is expected to not raise an exception.

Note that validation_and_conversion_funcs should be considered read-only.