qten.optim

Module reference for qten.optim.

optim

Optimization-oriented module base classes for QTen tensors.

This module bridges QTen's labelled Tensor objects with PyTorch's torch.nn.Module parameter and buffer machinery. Assigning a QTen tensor to a Module attribute keeps the public QTen wrapper while registering the underlying torch.Tensor data as module-owned state.

Repository usage

Use Module when optimizable QTen tensors should be managed by PyTorch optimizers while still carrying QTen dimension metadata. Use nograd_tensors() to mark tensor attributes that should be buffers rather than trainable parameters.

ModuleType module-attribute

ModuleType = TypeVar('ModuleType', bound=type['Module'])

Type variable representing a subclass object of Module.

This is used to type the @nograd_tensors(...) class decorator so that the decorated class preserves its original class type instead of being widened to a plain type[Module].

TENSOR_PARAM_PREFIX module-attribute

TENSOR_PARAM_PREFIX = 'tensor:'

Prefix used for the hidden nn.Parameter names registered for wrapped tensors.

When a public module attribute such as self.weight is assigned a Tensor, the wrapper object remains accessible under the original attribute name while the actual PyTorch parameter is registered under f"{TENSOR_PARAM_PREFIX}{name}". This keeps the wrapper and the registered parameter distinct.

TENSOR_BUFFER_PREFIX module-attribute

TENSOR_BUFFER_PREFIX = 'buffer:'

Prefix used for the hidden buffers registered for no-grad wrapped tensors.

When a public module attribute such as self.basis is assigned a Tensor and its name is listed in __nograd_tensors__, the underlying torch.Tensor is registered under f"{TENSOR_BUFFER_PREFIX}{name}" while the public attribute remains a QTen Tensor wrapper.
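Illustrative only: the hidden-name scheme implied by these two prefixes can be sketched as standalone functions. The source uses private methods (`_tensor_parameter_name`, `_tensor_buffer_name`) for this; the free functions below are a hypothetical stand-in showing the same string convention.

```python
TENSOR_PARAM_PREFIX = "tensor:"
TENSOR_BUFFER_PREFIX = "buffer:"

def tensor_parameter_name(name: str) -> str:
    # Hidden nn.Parameter name registered for a trainable tensor attribute.
    return f"{TENSOR_PARAM_PREFIX}{name}"

def tensor_buffer_name(name: str) -> str:
    # Hidden buffer name registered for a no-grad tensor attribute.
    return f"{TENSOR_BUFFER_PREFIX}{name}"

print(tensor_parameter_name("weight"))  # tensor:weight
print(tensor_buffer_name("basis"))      # buffer:basis
```

Because the hidden names contain a colon, they cannot collide with ordinary Python attribute names, which keeps the public wrapper and the registered state distinct.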

Module

Module()

Bases: Functional, Module, DeviceBounded

QTen base module combining multiple dispatch, device tracking, and tensor wrapping.

This class extends torch.nn.Module with two QTen-specific behaviors:

  1. Public attributes may be assigned Tensor objects directly. The underlying torch.Tensor data is automatically copied into module-owned storage. Trainable tensor attributes are registered as nn.Parameter values, while names listed in __nograd_tensors__ are registered as buffers. In both cases the public attribute remains a QTen Tensor wrapper.
  2. The module is also a Functional, so calling the module dispatches through QTen's multiple-dispatch mechanism rather than PyTorch's usual forward path.

The module additionally tracks its logical Device and automatically moves assigned DeviceBounded values onto the same device.

This class is also useful as a structured container for optimizable tensors, even when the object is not meant to behave like a trainable "model" with a forward pass. In that usage, assign the tensors you want PyTorch to manage as public attributes, optimize them through the module's parameter interface, and call export() when you need an independent non-differentiable Tensor snapshot for downstream use outside the module.

Ownership semantics are important:

  • Before assignment, a Tensor is functionally defined by its own wrapper value.
  • After assignment to a Module attribute, the wrapper becomes module-bound: its data points at module-owned storage, either a registered nn.Parameter or a registered buffer, depending on whether the name is marked in __nograd_tensors__.
  • Reading self.weight still returns a QTen Tensor wrapper, but its mutable/autograd state is now governed by the containing module and PyTorch's parameter machinery.
  • No-grad tensor attributes remain part of the module's buffer state, so they follow standard PyTorch buffer behavior for device moves and serialization while staying excluded from optimization.
  • Assignment is owning-by-value: the module copies the assigned tensor data into its own owned storage rather than aliasing the caller's tensor storage.
  • Use export() or export_all() when you need standalone tensor values whose subsequent state cannot be changed implicitly by optimizer steps, reassignment, or other module-side updates.
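The owning-by-value point can be illustrated with a plain-Python stand-in. This is an analogy only: the toy class below uses `copy.deepcopy` on lists where the real Module clones torch tensor data, but the aliasing behavior it demonstrates is the same.

```python
import copy

class OwningContainer:
    # Toy stand-in for Module's owning-by-value assignment (not the real qten class).
    def __init__(self):
        self._owned = {}

    def assign(self, name, data):
        # Clone incoming data so the container owns independent storage.
        self._owned[name] = copy.deepcopy(data)

    def step(self, name):
        # Simulate an optimizer-style in-place update on the owned storage.
        self._owned[name] = [x + 1.0 for x in self._owned[name]]

caller_data = [1.0, 2.0]
box = OwningContainer()
box.assign("weight", caller_data)
box.step("weight")
print(caller_data)           # [1.0, 2.0] -- caller's storage is unchanged
print(box._owned["weight"])  # [2.0, 3.0] -- only the owned copy moved
```

The same separation is why export() is needed in the other direction: reading the attribute back still hands you module-bound state, and only an explicit detach-and-clone yields a value the module can no longer mutate.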

Initialize the PyTorch module state and default the logical device to CPU.

Source code in src/qten/optim.py
def __init__(self):
    """
    Initialize the PyTorch module state and default the logical device to CPU.
    """
    nn.Module.__init__(self)
    self._device = Device("cpu")

__nograd_tensors__ class-attribute instance-attribute

__nograd_tensors__: frozenset[str] = frozenset()

Names of public Tensor attributes stored as buffers.

This is a class-level annotation populated by nograd_tensors(). The default value is an empty set, meaning assigned tensors are registered as parameters unless explicitly annotated otherwise.

__call__ class-attribute instance-attribute

__call__ = __call__

device property

device: Device

Return the current logical device recorded for this module.

Returns:

Type Description
Device

The logical device associated with the module.

export

export(name: str) -> Tensor

Export a module-owned tensor as an independent, non-differentiable snapshot.

This is intended for extracting a tensor attribute from the module for downstream use without preserving autograd linkage or shared storage with the original module-owned data. This is the recommended way to obtain a standalone tensor value from a module that is being used as a wrapper around optimizable tensors rather than purely as a callable model. This works uniformly for both parameter-backed tensors and buffer-backed no-grad tensors.

Parameters:

Name Type Description Default
name str

Name of the public module attribute to export.

required

Returns:

Type Description
Tensor

A detached cloned tensor with the same dims as the module attribute.

Raises:

Type Description
AttributeError

If the module has no attribute named name.

TypeError

If the named attribute is not a Tensor.

Source code in src/qten/optim.py
def export(self, name: str) -> Tensor:
    """
    Export a module-owned tensor as an independent, non-differentiable snapshot.

    This is intended for extracting a tensor attribute from the module for
    downstream use without preserving autograd linkage or shared storage
    with the original module-owned data. This is the recommended way to obtain
    a standalone tensor value from a module that is being used as a wrapper
    around optimizable tensors rather than purely as a callable model. This
    works uniformly for both parameter-backed tensors and buffer-backed
    no-grad tensors.

    Parameters
    ----------
    name : str
        Name of the public module attribute to export.

    Returns
    -------
    Tensor
        A detached cloned tensor with the same dims as the module attribute.

    Raises
    ------
    AttributeError
        If the module has no attribute named ``name``.
    TypeError
        If the named attribute is not a [`Tensor`][qten.linalg.tensors.Tensor].
    """
    tensor = getattr(self, name)
    if not isinstance(tensor, Tensor):
        raise TypeError(f"Attribute {name!r} is not a Tensor.")
    return tensor.detach().clone()

export_all

export_all() -> FrozenDict[str, Tensor]

Export all public tensor attributes from this module tree.

This recursively traverses nested Module instances and returns detached cloned tensor snapshots keyed by public dotted attribute names, for example "weight" or "inner.basis".

Returns:

Type Description
FrozenDict[str, Tensor]

Mapping from public tensor names to independent exported tensors.

Source code in src/qten/optim.py
def export_all(self) -> FrozenDict[str, Tensor]:
    """
    Export all public tensor attributes from this module tree.

    This recursively traverses nested [`Module`][qten.optim.Module] instances and returns
    detached cloned tensor snapshots keyed by public dotted attribute names,
    for example ``"weight"`` or ``"inner.basis"``.

    Returns
    -------
    FrozenDict[str, Tensor]
        Mapping from public tensor names to independent exported tensors.
    """
    exported: dict[str, Tensor] = {}
    for prefix, module in self.named_modules():
        if not isinstance(module, Module):
            continue
        for name, tensor in module._iter_public_tensors():
            full_name = f"{prefix}.{name}" if prefix else name
            exported[full_name] = tensor.detach().clone()
    return FrozenDict(exported)

freeze

freeze() -> Self

Disable gradient tracking for all module-owned parameters.

This applies recursively through submodules using PyTorch's parameter traversal and returns self for fluent usage. Buffer-backed tensor state declared via @nograd_tensors(...) is not registered as parameters and therefore remains unaffected.

Returns:

Type Description
Self

This module after all parameters have been frozen.

Source code in src/qten/optim.py
def freeze(self) -> Self:
    """
    Disable gradient tracking for all module-owned parameters.

    This applies recursively through submodules using PyTorch's parameter
    traversal and returns ``self`` for fluent usage. Buffer-backed tensor
    state declared via ``@nograd_tensors(...)`` is not registered as
    parameters and therefore remains unaffected.

    Returns
    -------
    Self
        This module after all parameters have been frozen.
    """
    for parameter in self.parameters():
        parameter.requires_grad_(False)
    return self

unfreeze

unfreeze() -> Self

Enable gradient tracking for all module-owned parameters.

This applies recursively through submodules using PyTorch's parameter traversal and returns self for fluent usage. Buffer-backed tensor state declared via @nograd_tensors(...) is not registered as parameters and therefore remains unaffected.

Returns:

Type Description
Self

This module after all parameters have been unfrozen.

Source code in src/qten/optim.py
def unfreeze(self) -> Self:
    """
    Enable gradient tracking for all module-owned parameters.

    This applies recursively through submodules using PyTorch's parameter
    traversal and returns ``self`` for fluent usage. Buffer-backed tensor
    state declared via ``@nograd_tensors(...)`` is not registered as
    parameters and therefore remains unaffected.

    Returns
    -------
    Self
        This module after all parameters have been unfrozen.
    """
    for parameter in self.parameters():
        parameter.requires_grad_(True)
    return self

__setattr__

__setattr__(name: str, value) -> None

Assign an attribute, wrapping Tensor as module-owned state.

Behavior
  • Any assigned DeviceBounded value is moved to self.device first.
  • Any assigned Tensor is copied into module-owned storage.
  • For names listed in __nograd_tensors__, the copied tensor is registered as a hidden buffer.
  • Otherwise the copied tensor is registered as a hidden torch.nn.Parameter.
  • The public attribute itself remains a QTen Tensor wrapper whose data points at the owned buffer or registered parameter.
  • If a non-Tensor value replaces a previously wrapped tensor attribute, the hidden parameter/buffer registration is removed.
  • Tensor assignment is owning-by-value: the module clones tensor data before storing it as a parameter or buffer, so later optimizer steps or in-place edits on the module do not alias the caller's original tensor storage.

In other words, assigning a Tensor transfers ownership of its mutable/autograd state to the module. The attribute remains convenient to use as a QTen wrapper, but it should now be treated as module-bound state rather than an isolated functional value.

Parameters:

Name Type Description Default
name str

Attribute name being assigned.

required
value Any

Value to assign.

required
Source code in src/qten/optim.py
def __setattr__(self, name: str, value) -> None:
    """
    Assign an attribute, wrapping [`Tensor`][qten.linalg.tensors.Tensor] as module-owned state.

    Behavior
    --------
    - Any assigned [`DeviceBounded`][qten.utils.devices.DeviceBounded] value is moved to
      ``self.device`` first.
    - Any assigned [`Tensor`][qten.linalg.tensors.Tensor] is copied into
      module-owned storage.
    - For names listed in ``__nograd_tensors__``, the copied tensor is
      registered as a hidden buffer.
    - Otherwise the copied tensor is registered as a hidden
      [`torch.nn.Parameter`](https://docs.pytorch.org/docs/stable/generated/torch.nn.Parameter.html).
    - The public attribute itself remains a QTen [`Tensor`][qten.linalg.tensors.Tensor] wrapper
      whose ``data`` points at the owned buffer or registered
      parameter.
    - If a non-[`Tensor`][qten.linalg.tensors.Tensor] value replaces a previously wrapped tensor
      attribute, the hidden parameter/buffer registration is removed.
    - Tensor assignment is owning-by-value: the module clones tensor data
      before storing it as a parameter or buffer, so later optimizer steps
      or in-place edits on the module do not alias the caller's original
      tensor storage.

    In other words, assigning a [`Tensor`][qten.linalg.tensors.Tensor] transfers ownership of its
    mutable/autograd state to the module. The attribute remains convenient
    to use as a QTen wrapper, but it should now be treated as
    module-bound state rather than an isolated functional value.

    Parameters
    ----------
    name : str
        Attribute name being assigned.
    value : Any
        Value to assign.
    """
    if isinstance(value, DeviceBounded):
        # Automatically move any assigned DeviceBounded objects to the same device as this module.
        value = value.to_device(self.device)
    if isinstance(value, Tensor):
        # Module assignment owns its parameter state by value rather than
        # aliasing the caller's tensor storage.
        source = value.data
        copied = source.detach().clone()
        if name in self.__nograd_tensors__:
            self._clear_tensor_parameter(name)
            self.register_buffer(self._tensor_buffer_name(name), copied)
            value = replace(
                value, data=self._buffers[self._tensor_buffer_name(name)]
            )
        else:
            self._clear_tensor_buffer(name)
            data = nn.Parameter(copied, requires_grad=source.requires_grad)
            nn.Module.__setattr__(self, self._tensor_parameter_name(name), data)
            value = replace(value, data=data)
    else:
        self._clear_tensor_parameter(name)
        self._clear_tensor_buffer(name)
    nn.Module.__setattr__(self, name, value)

__delattr__

__delattr__(name: str) -> None

Delete an attribute and remove any hidden tensor storage tied to it.

Parameters:

Name Type Description Default
name str

Attribute name to delete.

required
Source code in src/qten/optim.py
def __delattr__(self, name: str) -> None:
    """
    Delete an attribute and remove any hidden tensor storage tied to it.

    Parameters
    ----------
    name : str
        Attribute name to delete.
    """
    self._clear_tensor_parameter(name)
    self._clear_tensor_buffer(name)
    nn.Module.__delattr__(self, name)

to_device

to_device(device: Device) -> Self

Move the module and all owned tensor state to the specified device.

This delegates to torch.nn.Module.to, which already applies the move recursively to parameters and buffers. Public tensor wrappers are then refreshed to point at the moved owned storage before the logical device record is updated.

Parameters:

Name Type Description Default
device Device

Target logical device.

required

Returns:

Type Description
Self

This module after the move.

See Also

torch.nn.Module.to: PyTorch method used to move registered parameters and buffers.

Source code in src/qten/optim.py
@override
def to_device(self, device: Device) -> Self:
    """
    Move the module and all owned tensor state to the specified device.

    This delegates to
    [`torch.nn.Module.to`](https://docs.pytorch.org/docs/stable/generated/torch.nn.Module.html#torch.nn.Module.to),
    which already applies the
    move recursively to parameters and buffers. Public tensor wrappers are
    then refreshed to point at the moved owned storage before the logical
    device record is updated.

    Parameters
    ----------
    device : Device
        Target logical device.

    Returns
    -------
    Self
        This module after the move.

    See Also
    --------
    [`torch.nn.Module.to`](https://docs.pytorch.org/docs/stable/generated/torch.nn.Module.html#torch.nn.Module.to)
        PyTorch method used to move registered parameters and buffers.
    """
    nn.Module.to(self, device.torch_device())
    for module in self.modules():
        if not isinstance(module, Module):
            continue
        module._device = device
        module._refresh_public_tensor_wrappers()
    return self

cpu

cpu() -> Self

Return a copy of this object residing on the CPU device.

Returns:

Type Description
Self

A copy of this object on the logical CPU device.

Source code in src/qten/utils/devices.py
def cpu(self) -> Self:
    """
    Return a copy of this object residing on the CPU device.

    Returns
    -------
    Self
        A copy of this object on the logical CPU device.
    """
    return self.to_device(Device("cpu"))

gpu

gpu(index: Optional[int] = None) -> Self

Return a copy of this object residing on a GPU device.

Parameters:

Name Type Description Default
index Optional[int]

Optional CUDA device index. This should only be set when the target GPU backend supports multiple devices (e.g. CUDA). If not provided, the current device will be used.

`None`

Returns:

Type Description
Self

A copy of this object on the specified GPU device.

Source code in src/qten/utils/devices.py
def gpu(self, index: Optional[int] = None) -> Self:
    """
    Return a copy of this object residing on a GPU device.

    Parameters
    ----------
    index : Optional[int], default=`None`
        Optional CUDA device index. This should only be set when the target
        GPU backend supports multiple devices (e.g. CUDA). If not provided,
        the current device will be used.

    Returns
    -------
    Self
        A copy of this object on the specified GPU device.
    """
    device = Device("gpu", index)
    return self.to_device(device)

register classmethod

register(obj_type: type)

Register a function defining the action of the Functional on a specific object type.

This method returns a decorator. The decorated function should accept the functional instance as its first argument and an object of obj_type as its second argument. Any keyword arguments passed to invoke() are forwarded to the decorated function.

Dispatch is resolved at call time via MRO, so only the exact (obj_type, cls) key is stored here. Resolution later searches both:

  • the MRO of the runtime object type,
  • the MRO of the runtime functional type.

This means registrations on a functional superclass are inherited by subclass functionals unless a more specific registration overrides them.
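This resolution order can be sketched as a simplified standalone model (not the actual qten implementation): registrations live in a dict keyed by `(obj_type, functional_type)`, and lookup walks both MROs, object type outermost.

```python
# Simplified model of MRO-based multiple dispatch (illustrative, not qten's code).
registry = {}

def register(obj_type, functional_type, func):
    # Store only the exact (obj_type, functional_type) key.
    registry[(obj_type, functional_type)] = func

def resolve(obj_type, functional_type):
    # Search the object's MRO first, then the functional's MRO at each step.
    for o in obj_type.__mro__:
        for f in functional_type.__mro__:
            if (o, f) in registry:
                return registry[(o, f)]
    return None

class Base: ...
class Child(Base): ...
class Fn: ...

# Registering on the base object type...
register(Base, Fn, lambda functional, obj: "base handler")

# ...is inherited when dispatching on the subclass.
print(resolve(Child, Fn)(None, None))  # base handler
print(resolve(int, Fn))                # None -- no registration found
```

A registration for `(Child, Fn)` would be found before `(Base, Fn)` in this walk, which is exactly the "more specific registration overrides" behavior described above.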

Parameters:

Name Type Description Default
obj_type type

The type of object the function applies to.

required

Returns:

Type Description
Callable

A decorator that registers the function for the specified object type.

Examples:

@MyFunctional.register(MyObject)
def _(functional: MyFunctional, obj: MyObject) -> MyObject:
    ...
Source code in src/qten/abstracts.py
@classmethod
def register(cls, obj_type: type):
    """
    Register a function defining the action of the [`Functional`][qten.abstracts.Functional] on a specific object type.

    This method returns a decorator. The decorated function should accept
    the functional instance as its first argument and an object of
    `obj_type` as its second argument. Any keyword arguments passed to
    [`invoke()`][qten.abstracts.Functional.invoke] are forwarded to the
    decorated function.

    Dispatch is resolved at call time via MRO, so only the exact
    `(obj_type, cls)` key is stored here. Resolution later searches both:

    - the MRO of the runtime object type,
    - the MRO of the runtime functional type.

    This means registrations on a functional superclass are inherited by
    subclass functionals unless a more specific registration overrides them.

    Parameters
    ----------
    obj_type : type
        The type of object the function applies to.

    Returns
    -------
    Callable
        A decorator that registers the function for the specified object type.

    Examples
    --------
    ```python
    @MyFunctional.register(MyObject)
    def _(functional: MyFunctional, obj: MyObject) -> MyObject:
        ...
    ```
    """

    def decorator(func: Callable):
        cls._registered_methods[(obj_type, cls)] = func
        cls._invalidate_resolved_methods(obj_type)
        return func

    return decorator

get_applicable_types staticmethod

get_applicable_types(cls) -> tuple[type, ...]

Get all object types that can be applied by this Functional.

Parameters:

Name Type Description Default
cls Type[Functional]

Functional class whose direct registrations should be inspected.

required

Returns:

Type Description
Tuple[Type, ...]

A tuple of all registered object types that this Functional can handle.

Source code in src/qten/abstracts.py
@staticmethod
def get_applicable_types(cls) -> Tuple[Type, ...]:
    """
    Get all object types that can be applied by this [`Functional`][qten.abstracts.Functional].

    Parameters
    ----------
    cls : Type[Functional]
        Functional class whose direct registrations should be inspected.

    Returns
    -------
    Tuple[Type, ...]
        A tuple of all registered object types that this [`Functional`][qten.abstracts.Functional] can handle.
    """
    types = set()
    for obj_type, functional_type in cls._registered_methods.keys():
        if functional_type is cls:
            types.add(obj_type)
    return tuple(types)

allows

allows(obj: Any) -> bool

Check if this Functional can be applied to the given object.

Parameters:

Name Type Description Default
obj Any

The object to check for applicability.

required

Returns:

Type Description
bool

True if this Functional can be applied to the object, False otherwise.

Notes

Applicability is checked using the same inherited dispatch rules as invoke(): both the object's MRO and the functional-class MRO are searched.

Source code in src/qten/abstracts.py
def allows(self, obj: Any) -> bool:
    """
    Check if this [`Functional`][qten.abstracts.Functional] can be applied to the given object.

    Parameters
    ----------
    obj : Any
        The object to check for applicability.

    Returns
    -------
    bool
        True if this [`Functional`][qten.abstracts.Functional] can be applied to the object, False otherwise.

    Notes
    -----
    Applicability is checked using the same inherited dispatch rules as
    [`invoke()`][qten.abstracts.Functional.invoke]: both the object's MRO
    and the functional-class MRO are searched.
    """
    return self._resolve_method(type(obj), type(self)) is not None

invoke

invoke(obj: Any, **kwargs) -> Any

Apply this functional to obj using registered multimethod dispatch.

Parameters:

Name Type Description Default
obj Any

Runtime object to dispatch on.

required
**kwargs Any

Additional keyword arguments forwarded to the resolved implementation.

{}

Returns:

Type Description
Any

Result produced by the resolved registered method.

Raises:

Type Description
NotImplementedError

If no registration exists for the runtime pair (type(obj), type(self)) after MRO fallback.

Source code in src/qten/abstracts.py
def invoke(self, obj: Any, **kwargs) -> Any:
    """
    Apply this functional to `obj` using registered multimethod dispatch.

    Parameters
    ----------
    obj : Any
        Runtime object to dispatch on.
    **kwargs : Any
        Additional keyword arguments forwarded to the resolved
        implementation.

    Returns
    -------
    Any
        Result produced by the resolved registered method.

    Raises
    ------
    NotImplementedError
        If no registration exists for the runtime pair
        `(type(obj), type(self))` after MRO fallback.
    """
    functional_class = type(self)
    obj_class = type(obj)
    method = self._resolve_method(obj_class, functional_class)

    if method is None:
        raise NotImplementedError(
            f"No function registered for {obj_class.__name__} "
            f"with {functional_class.__name__}"
        )

    return method(self, obj, **kwargs)

nograd_tensors

nograd_tensors(
    *names: str,
) -> Callable[[ModuleType], ModuleType]

Mark Tensor attributes as no-grad module state.

The decorator stores the provided attribute names on the target class in the __nograd_tensors__ class attribute. When an instance later assigns a Tensor to one of those names, Module.__setattr__() copies detached data into module-owned buffer storage rather than registering it as a PyTorch parameter.

The annotation is inherited. Decorating a subclass extends the inherited set rather than replacing it.

Parameters:

Name Type Description Default
*names str

Attribute names whose assigned Tensor values should be stored as module buffers rather than nn.Parameter instances.

()

Returns:

Type Description
Callable[[ModuleType], ModuleType]

A class decorator that annotates the target Module subclass in place and returns that same class.

Examples:

@nograd_tensors("basis", "projector")
class MyModule(Module):
    ...
Source code in src/qten/optim.py
def nograd_tensors(*names: str) -> Callable[[ModuleType], ModuleType]:
    """
    Mark [`Tensor`][qten.linalg.tensors.Tensor] attributes as no-grad module state.

    The decorator stores the provided attribute names on the target class in the
    `__nograd_tensors__` class attribute. When an instance later assigns a
    [`Tensor`][qten.linalg.tensors.Tensor] to one of those names,
    [`Module.__setattr__()`][qten.optim.Module.__setattr__] copies detached data
    into module-owned buffer storage rather than registering it as a PyTorch
    parameter.

    The annotation is inherited. Decorating a subclass extends the inherited set
    rather than replacing it.

    Parameters
    ----------
    *names : str
        Attribute names whose assigned [`Tensor`][qten.linalg.tensors.Tensor]
        values should be stored as module buffers rather than `nn.Parameter`
        instances.

    Returns
    -------
    Callable[[ModuleType], ModuleType]
        A class decorator that annotates the target [`Module`][qten.optim.Module] subclass in place
        and returns that same class.

    Examples
    --------
    ```python
    @nograd_tensors("basis", "projector")
    class MyModule(Module):
        ...
    ```
    """

    def annotate(cls: ModuleType) -> ModuleType:
        inherited_names = getattr(cls, "__nograd_tensors__", ())
        cls.__nograd_tensors__ = frozenset((*inherited_names, *names))
        return cls

    return annotate
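The extend-not-replace inheritance behavior of the annotate helper can be exercised directly. The sketch below reproduces the decorator body shown in the source as a self-contained example, applying it to plain classes instead of real Module subclasses.

```python
# Self-contained reproduction of the nograd_tensors decorator body
# (plain classes stand in for Module subclasses here).
def nograd_tensors(*names):
    def annotate(cls):
        # getattr picks up a superclass annotation, if any, so decorating
        # a subclass extends the inherited set rather than replacing it.
        inherited_names = getattr(cls, "__nograd_tensors__", ())
        cls.__nograd_tensors__ = frozenset((*inherited_names, *names))
        return cls
    return annotate

@nograd_tensors("basis")
class Parent:
    pass

@nograd_tensors("projector")
class Child(Parent):
    pass

print(sorted(Child.__nograd_tensors__))  # ['basis', 'projector']
```

Note that because the decorator assigns `__nograd_tensors__` on the decorated class itself, `Parent` still sees only its own names; the union exists only on `Child`.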