romtools.vector_space.utils.scaler
Notes
The scaler class is used to perform scaled POD. Scaling is applied to tensors of shape $\mathbb{R}^{ N_{\mathrm{vars}} \times N_{\mathrm{x}} \times N_s}$, where $N_{\mathrm{vars}}$ is the number of state variables, $N_{\mathrm{x}}$ is the number of spatial degrees of freedom, and $N_s$ is the number of snapshots. These tensors are then reshaped into matrices when performing the SVD.
Theory
What is scaled POD, and why would I do it?
Standard POD computes a basis that minimizes the projection error in a standard Euclidean $\ell^2$ inner product, i.e., for a snapshot matrix $\mathbf{S} \in \mathbb{R}^{ N_{\mathrm{vars}} N_{\mathrm{x}} \times N_s}$, POD computes the basis by solving the minimization problem (assuming no affine offset) $$ \boldsymbol \Phi = \underset{ \boldsymbol \Phi_{*} \in \mathbb{R}^{ N_{\mathrm{vars}} N_{\mathrm{x}} \times K} \,|\, \boldsymbol \Phi_{*}^T \boldsymbol \Phi_{*} = \mathbf{I}}{ \mathrm{arg \; min} } \| \boldsymbol \Phi_{*} \boldsymbol \Phi_{*}^T \mathbf{S} - \mathbf{S} \|_2.$$ In this minimization problem, errors are measured in a standard $\ell^2$ norm. For most practical applications, where the snapshot matrix involves variables of different scales, this norm does not make sense (both intuitively and on dimensional grounds). As a practical example, consider compressible fluid dynamics, where the total energy is orders of magnitude larger than the density.
One of the most common approaches for mitigating this issue is to perform scaled POD. In scaled POD, we solve a minimization problem on a scaled snapshot matrix. Defining $\mathbf{S}_{*} = \mathbf{W}^{-1} \mathbf{S}$, where $\mathbf{W}$ is a weighting matrix (e.g., a diagonal matrix containing the max absolute value of each state variable), we compute the basis as the solution to the minimization problem $$ \boldsymbol \Phi = \mathbf{W} \underset{ \boldsymbol \Phi_{*} \in \mathbb{R}^{N_{\mathrm{vars}} N_{\mathrm{x}} \times K} \,|\, \boldsymbol \Phi_{*}^T \boldsymbol \Phi_{*} = \mathbf{I}}{ \mathrm{arg \; min} } \| \boldsymbol \Phi_{*} \boldsymbol \Phi_{*}^T \mathbf{S}_{*} - \mathbf{S}_{*} \|_2.$$
The Scaler encapsulates this information.
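For concreteness, the following is a minimal NumPy sketch of scaled POD with a per-variable max-abs weighting; the array shapes and the plain thin-SVD call are illustrative choices, not part of the romtools API.

```python
import numpy as np

# Hypothetical snapshot data: n_vars state variables, n_x spatial DOFs, n_s snapshots.
n_vars, n_x, n_s = 3, 100, 20
snapshots = np.random.rand(n_vars, n_x, n_s)

# One possible weighting W: a diagonal matrix holding the max absolute value of each variable.
scales = np.array([np.max(np.abs(snapshots[i])) for i in range(n_vars)])

# Scale, reshape the tensor into a matrix, and compute the basis of the scaled data.
scaled = snapshots / scales[:, None, None]              # S_* = W^{-1} S
left_vecs, _, _ = np.linalg.svd(scaled.reshape(n_vars * n_x, n_s), full_matrices=False)

# Undo the scaling on the basis: Phi = W U.
basis = (left_vecs.reshape(n_vars, n_x, -1) * scales[:, None, None]).reshape(n_vars * n_x, -1)
```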
API
```python
from typing import Protocol
import numpy as np
import romtools.linalg.linalg as la
```
```python
class Scaler(Protocol):
    '''
    Interface for the Scaler class.
    '''

    def pre_scale(self, data_tensor: np.ndarray) -> None:
        '''
        Scales the snapshot matrix in place before performing SVD
        '''
        ...

    def post_scale(self, data_tensor: np.ndarray) -> None:
        '''
        Scales the left singular vectors in place after performing SVD
        '''
        ...
```
Interface for the Scaler class.
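Because `Scaler` is a `typing.Protocol`, conformance is structural: any class exposing in-place `pre_scale` and `post_scale` methods with these signatures can be used, without inheriting from `Scaler`. A minimal hypothetical example (the class name and scale factor below are illustrative only):

```python
import numpy as np
from romtools.vector_space.utils.scaler import Scaler

class HalfScaler:
    '''Hypothetical scaler: halves the data before the SVD and doubles the basis afterwards.'''

    def pre_scale(self, data_tensor: np.ndarray) -> None:
        data_tensor *= 0.5   # modified in place, matching the protocol's convention

    def post_scale(self, data_tensor: np.ndarray) -> None:
        data_tensor *= 2.0

# Structural typing: HalfScaler satisfies the Scaler protocol without inheriting from it.
scaler: Scaler = HalfScaler()
```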
```python
class NoOpScaler:
    '''
    No op implementation

    This class conforms to `Scaler` protocol.
    '''

    def __init__(self) -> None:
        pass

    def pre_scale(self, data_tensor: np.ndarray):
        '''Does not alter the input data matrix.'''
        pass

    def post_scale(self, data_tensor):
        '''Does not alter the input data matrix.'''
        pass
```
No-op implementation.

This class conforms to the `Scaler` protocol.
```python
class VectorScaler:
    '''
    Concrete implementation designed to scale snapshot matrices by a vector.
    For a snapshot tensor $\\mathbf{S} \\in \\mathbb{R}^{N_{\\mathrm{u}} \\times N \\times K}$, the VectorScaler
    accepts in a scaling vector $\\mathbf{v} \\in \\mathbb{R}^{N}$, and scales by
    $$\\mathbf{S}^* = \\mathrm{diag}(\\mathbf{v})^{-1} \\mathbf{S}$$
    before performing POD (i.e., POD is performed on $\\mathbf{S}^*$). After POD is performed, the bases
    are post-scaled by $$\\boldsymbol \\Phi = \\mathrm{diag}(\\mathbf{v}) \\mathbf{U}$$

    **Note that scaling can cause bases to not be orthonormal; we do not
    recommend using scalers with the NoOpOrthonormalizer**

    This class conforms to `Scaler` protocol.
    '''

    def __init__(self, scaling_vector) -> None:
        '''
        Constructor for the VectorScaler.

        Args:
            scaling_vector: Array containing the scaling vector for each row
                in the snapshot matrix.

        This constructor initializes the VectorScaler with the specified
        scaling vector.
        '''
        self.__scaling_vector_matrix = scaling_vector
        self.__scaling_vector_matrix_inv = 1.0 / scaling_vector

    def pre_scale(self, data_tensor: np.ndarray) -> None:
        '''
        Scales the input data matrix in place using the inverse of the scaling vector.

        Args:
            data_tensor (np.ndarray): The input data matrix to be scaled.
        '''
        data_tensor *= self.__scaling_vector_matrix_inv[None, :, None]

    def post_scale(self, data_tensor: np.ndarray) -> None:
        '''
        Scales the input data matrix in place using the scaling vector.

        Args:
            data_tensor (np.ndarray): The input data matrix to be scaled.
        '''
        data_tensor *= self.__scaling_vector_matrix[None, :, None]
```
Concrete implementation designed to scale snapshot matrices by a vector. For a snapshot tensor $\mathbf{S} \in \mathbb{R}^{N_{\mathrm{u}} \times N \times K}$, the VectorScaler accepts a scaling vector $\mathbf{v} \in \mathbb{R}^{N}$ and scales by $$\mathbf{S}^* = \mathrm{diag}(\mathbf{v})^{-1} \mathbf{S}$$ before performing POD (i.e., POD is performed on $\mathbf{S}^*$). After POD is performed, the bases are post-scaled by $$\boldsymbol \Phi = \mathrm{diag}(\mathbf{v}) \mathbf{U},$$ where $\mathbf{U}$ contains the left singular vectors of $\mathbf{S}^*$.
Note that scaling can cause bases to not be orthonormal; we do not recommend using scalers with the NoOpOrthonormalizer
This class conforms to the `Scaler` protocol.
```python
def __init__(self, scaling_vector) -> None:
```
Constructor for the VectorScaler.
Arguments:
- scaling_vector: Array containing the scaling vector for each row in the snapshot matrix.
This constructor initializes the VectorScaler with the specified scaling vector.
```python
def pre_scale(self, data_tensor: np.ndarray) -> None:
```
Scales the input data matrix in place using the inverse of the scaling vector.
Arguments:
- data_tensor (np.ndarray): The input data matrix to be scaled.
```python
def post_scale(self, data_tensor: np.ndarray) -> None:
```
Scales the input data matrix in place using the scaling vector.
Arguments:
- data_tensor (np.ndarray): The input data matrix to be scaled.
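A brief usage sketch, assuming a snapshot tensor of shape (n_vars, n_x, n_s) and an illustrative positive weight per spatial degree of freedom (e.g., a cell volume); the numerical values below are made up:

```python
import numpy as np
from romtools.vector_space.utils.scaler import VectorScaler

n_vars, n_x, n_s = 2, 50, 10
snapshots = np.random.rand(n_vars, n_x, n_s)
cell_volumes = np.random.rand(n_x) + 0.1       # hypothetical positive weights, one per spatial DOF

scaler = VectorScaler(cell_volumes)
scaler.pre_scale(snapshots)                    # snapshots[i, j, k] /= cell_volumes[j], in place
# ... compute the POD basis from the scaled snapshots ...
basis_tensor = snapshots.copy()                # stand-in for the left singular vectors
scaler.post_scale(basis_tensor)                # rows multiplied back by cell_volumes, in place
```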
```python
class ScalarScaler:
    '''
    Applies a scalar scale factor

    This class conforms to `Scaler` protocol.
    '''

    def __init__(self, factor: float = 1.0) -> None:
        self._factor = factor

    def pre_scale(self, data_tensor: np.ndarray) -> None:
        '''
        Scales the input data matrix in place using the reciprocal of the input factor.

        Args:
            data_tensor (np.ndarray): The input data matrix to be scaled.
        '''
        data_tensor /= self._factor

    def post_scale(self, data_tensor: np.ndarray) -> None:
        '''
        Scales the input data matrix in place using the input factor.

        Args:
            data_tensor (np.ndarray): The input data matrix to be scaled.
        '''
        data_tensor *= self._factor
```
Applies a scalar scale factor.

This class conforms to the `Scaler` protocol.
```python
def pre_scale(self, data_tensor: np.ndarray) -> None:
```
Scales the input data matrix in place using the reciprocal of the input factor.
Arguments:
- data_tensor (np.ndarray): The input data matrix to be scaled.
```python
def post_scale(self, data_tensor: np.ndarray) -> None:
```
Scales the input data matrix in place using the input factor.
Arguments:
- data_tensor (np.ndarray): The input data matrix to be scaled.
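A short round-trip sketch with illustrative values:

```python
import numpy as np
from romtools.vector_space.utils.scaler import ScalarScaler

data = np.full((1, 4, 3), 6.0)
scaler = ScalarScaler(factor=2.0)
scaler.pre_scale(data)     # every entry becomes 3.0
scaler.post_scale(data)    # back to 6.0
```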
```python
class VariableScaler:
    '''
    Concrete implementation designed for snapshot matrices involving multiple
    state variables.

    This class is designed to scale a data matrix comprising multiple states
    (e.g., for the Navier--Stokes, rho, rho u, rhoE)

    This scaler will scale each variable based on
    - max-abs scaling: for the $i$th state variable $u_i$, we will compute the scaling as
      $s_i = \\mathrm{max}( \\mathrm{abs}( S_i ) )$, where $S_i$ denotes the snapshot matrix of the $i$th variable.
    - mean abs: for the $i$th state variable $u_i$, we will compute the scaling as
      $s_i = \\mathrm{mean}( \\mathrm{abs}( S_i ) )$, where $S_i$ denotes the snapshot matrix of the $i$th variable.
    - variance: for the $i$th state variable $u_i$, we will compute the scaling as
      $s_i = \\mathrm{std}( S_i ) $, where $S_i$ denotes the snapshot matrix of the $i$th variable.

    This class conforms to `Scaler` protocol.
    '''

    def __init__(self, scaling_type) -> None:
        '''
        Constructor for the VariableScaler.

        Args:
            scaling_type (str): The scaling method to use ('max_abs',
                'mean_abs', or 'variance').

        This constructor initializes the VariableScaler with the specified
        scaling type.
        '''
        self.__scaling_type = scaling_type
        self.have_scales_been_initialized = False
        self.var_scales_ = None

    def initialize_scalings(self, data_tensor: np.ndarray) -> None:
        '''
        Initializes the scaling factors for each state variable based on the
        specified method.

        Args:
            data_tensor (np.ndarray): The input data matrix.
        '''
        n_var = data_tensor.shape[0]
        self.var_scales_ = np.ones(n_var)
        for i in range(n_var):
            ith_var = data_tensor[i]
            if self.__scaling_type == "max_abs":
                var_scale = la.max(abs(ith_var))
            elif self.__scaling_type == "mean_abs":
                var_scale = la.mean(abs(ith_var))
            elif self.__scaling_type == "variance":
                var_scale = la.std(ith_var)

            # in case of a zero field (e.g., 2D)
            if var_scale < 1e-10:
                var_scale = 1.0
            self.var_scales_[i] = var_scale
        self.have_scales_been_initialized = True

    # These are all in-place operations
    def pre_scale(self, data_tensor: np.ndarray) -> None:
        '''
        Scales the input data matrix in place before processing, taking into account
        the previously initialized scaling factors.

        Args:
            data_tensor (np.ndarray): The input data matrix to be scaled.
        '''
        n_var = data_tensor.shape[0]
        if not self.have_scales_been_initialized:
            self.initialize_scalings(data_tensor)
        # scale each field (variable scaling)
        for i in range(n_var):
            data_tensor[i] /= self.var_scales_[i]

    def post_scale(self, data_tensor: np.ndarray) -> None:
        '''
        Scales the input data matrix in place using the previously initialized scaling factors.

        Args:
            data_tensor (np.ndarray): The input data matrix to be scaled.
        '''
        assert self.have_scales_been_initialized, "Scales in VariableScaler have not been initialized"
        # scale each field
        n_var = data_tensor.shape[0]
        for i in range(n_var):
            data_tensor[i] *= self.var_scales_[i]
```
Concrete implementation designed for snapshot matrices involving multiple state variables.
This class is designed to scale a data matrix comprising multiple state variables (e.g., for the Navier--Stokes equations: rho, rho u, rho E).
This scaler will scale each variable based on one of the following:
- max-abs scaling ('max_abs'): for the $i$th state variable $u_i$, we compute the scaling as $s_i = \mathrm{max}( \mathrm{abs}( S_i ) )$, where $S_i$ denotes the snapshot matrix of the $i$th variable.
- mean-abs scaling ('mean_abs'): for the $i$th state variable $u_i$, we compute the scaling as $s_i = \mathrm{mean}( \mathrm{abs}( S_i ) )$, where $S_i$ denotes the snapshot matrix of the $i$th variable.
- variance scaling ('variance'): for the $i$th state variable $u_i$, we compute the scaling as $s_i = \mathrm{std}( S_i )$, where $S_i$ denotes the snapshot matrix of the $i$th variable.
This class conforms to the `Scaler` protocol.
```python
def __init__(self, scaling_type) -> None:
```
Constructor for the VariableScaler.
Arguments:
- scaling_type (str): The scaling method to use ('max_abs', 'mean_abs', or 'variance').

This constructor initializes the VariableScaler with the specified scaling type.
```python
def initialize_scalings(self, data_tensor: np.ndarray) -> None:
```
Initializes the scaling factors for each state variable based on the specified method.
Arguments:
- data_tensor (np.ndarray): The input data matrix.
```python
def pre_scale(self, data_tensor: np.ndarray) -> None:
```
Scales the input data matrix in place before processing, taking into account the previously initialized scaling factors.
Arguments:
- data_tensor (np.ndarray): The input data matrix to be scaled.
```python
def post_scale(self, data_tensor: np.ndarray) -> None:
```
Scales the input data matrix in place using the previously initialized per-variable scaling factors.
Arguments:
- data_tensor (np.ndarray): The input data matrix to be scaled.
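A usage sketch with max-abs scaling on a hypothetical two-variable tensor whose variables live on very different scales:

```python
import numpy as np
from romtools.vector_space.utils.scaler import VariableScaler

n_vars, n_x, n_s = 2, 50, 10
snapshots = np.random.rand(n_vars, n_x, n_s)
snapshots[1] *= 1.0e5                  # second variable is orders of magnitude larger

scaler = VariableScaler("max_abs")
scaler.pre_scale(snapshots)            # computes and stores per-variable scales, then divides in place
# ... perform the SVD on the scaled, reshaped snapshots ...
modes = snapshots.copy()               # stand-in for the left singular vectors, reshaped back to a tensor
scaler.post_scale(modes)               # multiplies each variable block by its stored scale, in place
```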
```python
class VariableAndVectorScaler:
    '''
    Concrete implementation designed to scale snapshot matrices involving
    multiple state variables by both the variable magnitudes and an additional
    vector. This is particularly useful when wishing to perform POD for,
    e.g., a finite volume method where we want to scale by the cell volumes as
    well as the variable magnitudes. This implementation combines the
    VectorScaler and VariableScaler classes.

    This class conforms to `Scaler` protocol.
    '''

    def __init__(self, scaling_vector, scaling_type) -> None:
        '''
        Constructor for the VariableAndVectorScaler.

        Args:
            scaling_vector: Array containing the scaling vector for each row
                in the snapshot matrix.
            scaling_type: Scaling method ('max_abs',
                'mean_abs', or 'variance') for variable magnitudes.

        This constructor initializes the `VariableAndVectorScaler` with the
        specified parameters.
        '''
        self.__my_variable_scaler = VariableScaler(scaling_type)
        self.__my_vector_scaler = VectorScaler(scaling_vector)

    def pre_scale(self, data_tensor: np.ndarray) -> None:
        '''
        Scales the input data matrix in place before processing, first using the
        `VariableScaler` and then the `VectorScaler`.

        Args:
            data_tensor (np.ndarray): The input data matrix to be scaled.
        '''
        self.__my_variable_scaler.pre_scale(data_tensor)
        self.__my_vector_scaler.pre_scale(data_tensor)

    def post_scale(self, data_tensor: np.ndarray) -> None:
        '''
        Scales the input data matrix in place after processing, first using the
        `VectorScaler` and then the `VariableScaler`.

        Args:
            data_tensor (np.ndarray): The input data matrix to be scaled.
        '''
        self.__my_vector_scaler.post_scale(data_tensor)
        self.__my_variable_scaler.post_scale(data_tensor)
```
Concrete implementation designed to scale snapshot matrices involving multiple state variables by both the variable magnitudes and an additional vector. This is particularly useful when performing POD for, e.g., a finite volume method, where we want to scale by the cell volumes as well as the variable magnitudes. This implementation combines the VectorScaler and VariableScaler classes.
This class conforms to the `Scaler` protocol.
```python
def __init__(self, scaling_vector, scaling_type) -> None:
```
Constructor for the VariableAndVectorScaler.
Arguments:
- scaling_vector: Array containing the scaling vector for each row in the snapshot matrix.
- scaling_type: Scaling method ('max_abs', 'mean_abs', or 'variance') for variable magnitudes.
This constructor initializes the `VariableAndVectorScaler` with the specified parameters.
```python
def pre_scale(self, data_tensor: np.ndarray) -> None:
```
Scales the input data matrix in place before processing, first using the `VariableScaler` and then the `VectorScaler`.
Arguments:
- data_tensor (np.ndarray): The input data matrix to be scaled.
```python
def post_scale(self, data_tensor: np.ndarray) -> None:
```
Scales the input data matrix in place after processing, first using the `VectorScaler` and then the `VariableScaler`.
Arguments:
- data_tensor (np.ndarray): The input data matrix to be scaled.
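To close, a minimal end-to-end sketch combining variable and vector scaling with a thin SVD; the cell volumes, shapes, and the plain `numpy` SVD call are illustrative choices, not prescribed by the library:

```python
import numpy as np
from romtools.vector_space.utils.scaler import VariableAndVectorScaler

n_vars, n_x, n_s = 3, 200, 25
snapshots = np.random.rand(n_vars, n_x, n_s)
cell_volumes = np.random.rand(n_x) + 0.1       # hypothetical finite-volume cell volumes

scaler = VariableAndVectorScaler(cell_volumes, "max_abs")
scaler.pre_scale(snapshots)                    # variable scaling first, then vector scaling, in place

# Reshape the scaled tensor into a matrix and take a thin SVD.
left_vecs, _, _ = np.linalg.svd(snapshots.reshape(n_vars * n_x, n_s), full_matrices=False)

basis = left_vecs.reshape(n_vars, n_x, -1)
scaler.post_scale(basis)                       # vector scaling undone first, then variable scaling
```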