qten.optim
Module reference for qten.optim.
optim
Optimization-oriented module base classes for QTen tensors.
This module bridges QTen's labelled Tensor
objects with PyTorch's torch.nn.Module
parameter and buffer machinery. Assigning a QTen tensor to a
Module attribute keeps the public QTen wrapper while
registering the underlying torch.Tensor data as module-owned state.
Repository usage
Use Module when optimizable QTen tensors should be
managed by PyTorch optimizers while still carrying QTen dimension metadata.
Use nograd_tensors() to mark tensor attributes
that should be buffers rather than trainable parameters.
ModuleType
module-attribute
ModuleType = TypeVar('ModuleType', bound=type['Module'])
Type variable representing a subclass object of Module.
This is used to type the @nograd_tensors(...) class decorator so that the
decorated class preserves its original class type instead of being widened to a
plain type[Module].
TENSOR_PARAM_PREFIX
module-attribute
TENSOR_PARAM_PREFIX = 'tensor:'
Prefix used for the hidden nn.Parameter names registered for wrapped tensors.
When a public module attribute such as self.weight is assigned a
Tensor, the wrapper object remains accessible under
the original attribute name while the actual PyTorch parameter is registered
under f"{TENSOR_PARAM_PREFIX}{name}". This keeps the wrapper and the
registered parameter distinct.
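The naming scheme can be sketched in plain Python. The helper functions below are illustrative only, not part of the qten API; they just show how the prefix keeps the public wrapper name and the hidden registration name distinct:

```python
# Illustrative sketch of the hidden-name scheme (not the actual qten implementation).
TENSOR_PARAM_PREFIX = "tensor:"
TENSOR_BUFFER_PREFIX = "buffer:"

def hidden_param_name(name: str) -> str:
    """Name under which the torch parameter for a wrapped tensor would be registered."""
    return f"{TENSOR_PARAM_PREFIX}{name}"

def hidden_buffer_name(name: str) -> str:
    """Name under which the buffer for a no-grad wrapped tensor would be registered."""
    return f"{TENSOR_BUFFER_PREFIX}{name}"

# The public attribute "weight" and the hidden parameter name never collide.
print(hidden_param_name("weight"))   # tensor:weight
print(hidden_buffer_name("basis"))   # buffer:basis
```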
TENSOR_BUFFER_PREFIX
module-attribute
TENSOR_BUFFER_PREFIX = 'buffer:'
Prefix used for the hidden buffers registered for no-grad wrapped tensors.
When a public module attribute such as self.basis is assigned a
Tensor and its name is listed in
__nograd_tensors__, the underlying torch.Tensor is registered under
f"{TENSOR_BUFFER_PREFIX}{name}" while the public attribute remains a
QTen Tensor wrapper.
Module
Module()
Bases: Functional, Module, DeviceBounded
QTen base module combining multiple dispatch, device tracking, and tensor wrapping.
This class extends
torch.nn.Module
with two QTen-specific behaviors:
- Public attributes may be assigned Tensor objects directly. The underlying torch.Tensor data is automatically copied into module-owned storage. Trainable tensor attributes are registered as nn.Parameter values, while names listed in __nograd_tensors__ are registered as buffers. In both cases the public attribute remains a QTen Tensor wrapper.
- The module is also a Functional, so calling the module dispatches through QTen's multiple-dispatch mechanism rather than PyTorch's usual forward path.
The module additionally tracks its logical Device
and automatically moves assigned DeviceBounded
values onto the same device.
This class is also useful as a structured container for optimizable tensors,
even when the object is not meant to behave like a trainable "model" with a
forward pass. In that usage, assign the tensors you want PyTorch to manage
as public attributes, optimize them through the module's parameter
interface, and call export() when you need an independent
non-differentiable Tensor snapshot for
downstream use outside the module.
Ownership semantics are important:
- Before assignment, a Tensor is functionally
defined by its own wrapper value.
- After assignment to a Module attribute, the wrapper becomes
module-bound: its data points at module-owned storage, either a
registered nn.Parameter or registered buffer depending on
whether the name is marked in __nograd_tensors__.
- Reading self.weight still returns a QTen Tensor wrapper, but
its mutable/autograd state is now governed by the containing module and
PyTorch parameter machinery.
- No-grad tensor attributes remain part of the module's buffer state, so
they follow standard PyTorch buffer behavior for device moves and
serialization while remaining excluded from optimization.
- Assignment is owning-by-value: the module copies the assigned tensor data
into its own owned storage rather than aliasing the caller's tensor
storage.
- Use export() or export_all() if you need standalone tensor
values whose subsequent state cannot be changed implicitly by optimizer
steps, reassignment, or other module-side updates.
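The owning-by-value semantics can be illustrated with a minimal pure-Python sketch. Nothing here is the qten implementation; a plain list stands in for tensor data, and the toy class only mimics the copy-on-assignment behavior described above:

```python
import copy

class OwningModule:
    """Toy container that copies assigned data instead of aliasing it."""

    def __init__(self):
        object.__setattr__(self, "_owned", {})

    def __setattr__(self, name, value):
        # Owning-by-value: clone on assignment, so the module owns its storage.
        self._owned[name] = copy.deepcopy(value)

    def __getattr__(self, name):
        try:
            return self._owned[name]
        except KeyError:
            raise AttributeError(name)

m = OwningModule()
data = [1.0, 2.0, 3.0]
m.weight = data
data[0] = 99.0        # mutate the caller's list...
print(m.weight[0])    # ...the module-owned copy is unaffected: 1.0
```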
Initialize the PyTorch module state and default the logical device to CPU.
Source code in src/qten/optim.py
__nograd_tensors__
class-attribute
instance-attribute
__nograd_tensors__: frozenset[str] = frozenset()
Names of public Tensor attributes stored as buffers.
This is a class-level annotation populated by
nograd_tensors(). The
default value is an empty set, meaning assigned tensors are registered as
parameters unless explicitly annotated otherwise.
__call__
class-attribute
instance-attribute
__call__ = __call__
device
property
device: Device
Return the current logical device recorded for this module.
Returns:
| Type | Description |
|---|---|
| Device | The logical device associated with the module. |
export
export(name: str) -> Tensor
Export a module-owned tensor as an independent, non-differentiable snapshot.
This is intended for extracting a tensor attribute from the module for downstream use without preserving autograd linkage or shared storage with the original module-owned data. It is the recommended way to obtain a standalone tensor value from a module that is used as a wrapper around optimizable tensors rather than purely as a callable model. Export works uniformly for both parameter-backed tensors and buffer-backed no-grad tensors.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Name of the public module attribute to export. | required |
Returns:
| Type | Description |
|---|---|
| Tensor | A detached cloned tensor with the same dims as the module attribute. |
Raises:
| Type | Description |
|---|---|
| AttributeError | If the module has no attribute named name. |
| TypeError | If the named attribute is not a Tensor. |
Source code in src/qten/optim.py
export_all
export_all() -> FrozenDict[str, Tensor]
Export all public tensor attributes from this module tree.
This recursively traverses nested Module instances and returns
detached cloned tensor snapshots keyed by public dotted attribute names,
for example "weight" or "inner.basis".
Returns:
| Type | Description |
|---|---|
| FrozenDict[str, Tensor] | Mapping from public tensor names to independent exported tensors. |
Source code in src/qten/optim.py
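The recursive dotted-name traversal can be sketched without qten or torch. In this illustrative stand-in, nested dicts play the role of submodules and lists play the role of tensors; each leaf is exported as an independent copy:

```python
import copy

def export_all_sketch(tree: dict, prefix: str = "") -> dict:
    """Flatten a nested mapping into dotted names, copying each leaf.

    Stands in for Module.export_all(): nested dicts act as submodules,
    lists act as tensors.
    """
    out = {}
    for name, value in tree.items():
        key = f"{prefix}{name}"
        if isinstance(value, dict):
            # "Submodule": recurse, extending the dotted prefix.
            out.update(export_all_sketch(value, prefix=f"{key}."))
        else:
            # "Tensor": export an independent snapshot.
            out[key] = copy.deepcopy(value)
    return out

module = {"weight": [1.0, 2.0], "inner": {"basis": [0.0, 1.0]}}
snapshot = export_all_sketch(module)
print(sorted(snapshot))   # ['inner.basis', 'weight']
```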
freeze
freeze() -> Self
Disable gradient tracking for all module-owned parameters.
This applies recursively through submodules using PyTorch's parameter
traversal and returns self for fluent usage. Buffer-backed tensor
state declared via @nograd_tensors(...) is not registered as
parameters and therefore remains unaffected.
Returns:
| Type | Description |
|---|---|
| Self | This module after all parameters have been frozen. |
Source code in src/qten/optim.py
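The recursive, fluent freeze/unfreeze pattern can be sketched with a toy class (illustrative only; a plain boolean stands in for per-parameter requires_grad state):

```python
class ToyModule:
    """Minimal stand-in for the freeze()/unfreeze() semantics."""

    def __init__(self, children=()):
        self.requires_grad = True
        self.children = list(children)

    def _set_requires_grad(self, flag: bool) -> "ToyModule":
        self.requires_grad = flag
        for child in self.children:   # apply recursively through submodules
            child._set_requires_grad(flag)
        return self                   # return self for fluent usage

    def freeze(self):
        return self._set_requires_grad(False)

    def unfreeze(self):
        return self._set_requires_grad(True)

inner = ToyModule()
outer = ToyModule(children=[inner])
assert outer.freeze() is outer        # fluent: chainable
print(inner.requires_grad)            # False: applied recursively
```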
unfreeze
unfreeze() -> Self
Enable gradient tracking for all module-owned parameters.
This applies recursively through submodules using PyTorch's parameter
traversal and returns self for fluent usage. Buffer-backed tensor
state declared via @nograd_tensors(...) is not registered as
parameters and therefore remains unaffected.
Returns:
| Type | Description |
|---|---|
| Self | This module after all parameters have been unfrozen. |
Source code in src/qten/optim.py
__setattr__
__setattr__(name: str, value) -> None
Assign an attribute, wrapping Tensor as module-owned state.
Behavior
- Any assigned DeviceBounded value is moved to self.device first.
- Any assigned Tensor is copied into module-owned storage.
- For names listed in __nograd_tensors__, the copied tensor is registered as a hidden buffer.
- Otherwise the copied tensor is registered as a hidden torch.nn.Parameter.
- The public attribute itself remains a QTen Tensor wrapper whose data points at the owned buffer or registered parameter.
- If a non-Tensor value replaces a previously wrapped tensor attribute, the hidden parameter/buffer registration is removed.
- Tensor assignment is owning-by-value: the module clones tensor data before storing it as a parameter or buffer, so later optimizer steps or in-place edits on the module do not alias the caller's original tensor storage.
In other words, assigning a Tensor transfers ownership of its
mutable/autograd state to the module. The attribute remains convenient
to use as a QTen wrapper, but it should now be treated as
module-bound state rather than an isolated functional value.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Attribute name being assigned. | required |
| value | Any | Value to assign. | required |
Source code in src/qten/optim.py
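The __setattr__ contract above can be sketched in pure Python. Everything here is illustrative (Wrapped stands in for the QTen Tensor wrapper, a dict stands in for the module's parameter/buffer registry); only the shape of the behavior matches the description:

```python
TENSOR_PARAM_PREFIX = "tensor:"
TENSOR_BUFFER_PREFIX = "buffer:"

class Wrapped:
    """Stands in for a QTen Tensor wrapper."""
    def __init__(self, data):
        self.data = data

class SketchModule:
    """Pure-Python sketch of the __setattr__ contract described above."""
    __nograd_tensors__ = frozenset()

    def __init__(self):
        self._state = {}    # hidden parameter/buffer registry
        self._public = {}   # public wrapper attributes

    def __setattr__(self, name, value):
        if name.startswith("_"):
            object.__setattr__(self, name, value)
            return
        # Drop any stale hidden registration for this name.
        self._state.pop(TENSOR_PARAM_PREFIX + name, None)
        self._state.pop(TENSOR_BUFFER_PREFIX + name, None)
        if isinstance(value, Wrapped):
            prefix = (TENSOR_BUFFER_PREFIX if name in self.__nograd_tensors__
                      else TENSOR_PARAM_PREFIX)
            owned = list(value.data)              # owning-by-value copy
            self._state[prefix + name] = owned
            value = Wrapped(owned)                # wrapper points at owned storage
        self._public[name] = value

m = SketchModule()
m.weight = Wrapped([1.0, 2.0])
print("tensor:weight" in m._state)   # True: hidden parameter registered
m.weight = "not a tensor"
print("tensor:weight" in m._state)   # False: registration removed
```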
__delattr__
__delattr__(name: str) -> None
Delete an attribute and remove any hidden tensor storage tied to it.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Attribute name to delete. | required |
Source code in src/qten/optim.py
to_device
to_device(device: Device) -> Self
Move the module and all owned tensor state to the specified device.
This delegates to
torch.nn.Module.to,
which already applies the
move recursively to parameters and buffers. Public tensor wrappers are
then refreshed to point at the moved owned storage before the logical
device record is updated.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| device | Device | Target logical device. | required |
Returns:
| Type | Description |
|---|---|
| Self | This module after the move. |
See Also
torch.nn.Module.to
PyTorch method used to move registered parameters and buffers.
Source code in src/qten/optim.py
cpu
cpu() -> Self
Return a copy of this object residing on the CPU device.
Returns:
| Type | Description |
|---|---|
| Self | A copy of this object on the logical CPU device. |
Source code in src/qten/utils/devices.py
gpu
gpu(index: Optional[int] = None) -> Self
Return a copy of this object residing on a GPU device.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| index | Optional[int] | Optional CUDA device index. This should only be set when the target GPU backend supports multiple devices (e.g. CUDA). If not provided, the current device will be used. | None |
Returns:
| Type | Description |
|---|---|
| Self | A copy of this object on the specified GPU device. |
Source code in src/qten/utils/devices.py
register
classmethod
register(obj_type: type)
Register a function defining the action of the Functional on a specific object type.
This method returns a decorator. The decorated function should accept
the functional instance as its first argument and an object of
obj_type as its second argument. Any keyword arguments passed to
invoke() are forwarded to the
decorated function.
Dispatch is resolved at call time via MRO, so only the exact
(obj_type, cls) key is stored here. Resolution later searches both:
- the MRO of the runtime object type,
- the MRO of the runtime functional type.
This means registrations on a functional superclass are inherited by subclass functionals unless a more specific registration overrides them.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj_type | type | The type of object the function applies to. | required |
Returns:
| Type | Description |
|---|---|
| Callable | A decorator that registers the function for the specified object type. |
Examples:
@MyFunctional.register(MyObject)
def _(functional: MyFunctional, obj: MyObject) -> MyObject:
...
Source code in src/qten/abstracts.py
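The MRO-based resolution can be sketched with a toy dispatcher. This is not the qten implementation, only an illustration of the inheritance behavior described above: a registration stored under the exact (obj_type, cls) key is found later by searching both the object's MRO and the functional's MRO:

```python
class Functional:
    """Toy multiple dispatch sketching the (obj_type, cls) registry scheme."""
    _registry = {}   # (obj_type, functional_type) -> function

    @classmethod
    def register(cls, obj_type):
        def decorator(fn):
            # Only the exact (obj_type, cls) key is stored at registration time.
            Functional._registry[(obj_type, cls)] = fn
            return fn
        return decorator

    def invoke(self, obj, **kwargs):
        # Resolution searches the object's MRO, then the functional's MRO.
        for o_t in type(obj).__mro__:
            for f_t in type(self).__mro__:
                fn = Functional._registry.get((o_t, f_t))
                if fn is not None:
                    return fn(self, obj, **kwargs)
        raise NotImplementedError((type(obj), type(self)))

class Base(Functional):
    pass

class Derived(Base):
    pass

@Base.register(int)
def _(functional, obj):
    return obj * 2

# Registration on Base is inherited by the Derived functional.
print(Derived().invoke(21))   # 42
```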
get_applicable_types
staticmethod
get_applicable_types() -> tuple[type, ...]
Get all object types that can be applied by this Functional.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| cls | Type[Functional] | Functional class whose direct registrations should be inspected. | required |
Returns:
| Type | Description |
|---|---|
| Tuple[Type, ...] | A tuple of all registered object types that this Functional can be applied to. |
Source code in src/qten/abstracts.py
allows
allows(obj: Any) -> bool
Check if this Functional can be applied on the given object.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Any | The object to check for applicability. | required |
Returns:
| Type | Description |
|---|---|
| bool | True if this Functional can be applied to obj, False otherwise. |
Notes
Applicability is checked using the same inherited dispatch rules as
invoke(): both the object's MRO
and the functional-class MRO are searched.
Source code in src/qten/abstracts.py
invoke
invoke(obj: Any, **kwargs) -> Any
Apply this functional to obj using registered multimethod dispatch.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Any | Runtime object to dispatch on. | required |
| **kwargs | Any | Additional keyword arguments forwarded to the resolved implementation. | {} |
Returns:
| Type | Description |
|---|---|
| Any | Result produced by the resolved registered method. |
Raises:
| Type | Description |
|---|---|
| NotImplementedError | If no registration exists for the runtime pair of object type and functional type. |
Source code in src/qten/abstracts.py
nograd_tensors
nograd_tensors(
*names: str,
) -> Callable[[ModuleType], ModuleType]
Mark Tensor attributes as no-grad module state.
The decorator stores the provided attribute names on the target class in the
__nograd_tensors__ class attribute. When an instance later assigns a
Tensor to one of those names,
Module.__setattr__() copies detached data
into module-owned buffer storage rather than registering it as a PyTorch
parameter.
The annotation is inherited. Decorating a subclass extends the inherited set rather than replacing it.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| *names | str | Attribute names whose assigned Tensor values should be registered as buffers. | () |
Returns:
| Type | Description |
|---|---|
| Callable[[ModuleType], ModuleType] | A class decorator that annotates the target Module subclass. |
Examples:
@nograd_tensors("basis", "projector")
class MyModule(Module):
...
Source code in src/qten/optim.py
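The set-extending inheritance behavior can be sketched with an illustrative decorator (nograd_tensors_sketch and ModuleSketch are hypothetical stand-ins, not the qten implementations):

```python
def nograd_tensors_sketch(*names):
    """Sketch of the class decorator: extends the inherited set of names."""
    def decorate(cls):
        # Union with whatever the class already inherits, rather than replacing it.
        inherited = getattr(cls, "__nograd_tensors__", frozenset())
        cls.__nograd_tensors__ = frozenset(inherited) | frozenset(names)
        return cls
    return decorate

class ModuleSketch:
    __nograd_tensors__ = frozenset()

@nograd_tensors_sketch("basis")
class Parent(ModuleSketch):
    pass

@nograd_tensors_sketch("projector")
class Child(Parent):
    pass

# Decorating the subclass extends the inherited set instead of replacing it.
print(sorted(Child.__nograd_tensors__))   # ['basis', 'projector']
```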