Integrations#

This article describes how adaptix works with other packages and systems.

Supported model kinds#

Models are classes that have a predefined set of fields. Adaptix processes models in the same, consistent way.

Models that are supported out of the box:

  • dataclass

  • TypedDict

  • NamedTuple

  • attrs

  • sqlalchemy

  • pydantic

Arbitrary types can also be loaded via introspection of the __init__ method, but they cannot be dumped.

You do not need to do anything to enable support for models from a third-party library. Everything just works. But you can install adaptix with certain extras to ensure version compatibility.

Due to the way Python handles annotations, there is a bug: when the field annotations of a TypedDict are stringified or from __future__ import annotations is present in the file, the Required and NotRequired specifiers are ignored when __required_keys__ and __optional_keys__ are calculated. Adaptix takes this into account and processes such models properly.
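
The snippet below is a minimal sketch of the underlying issue; the exact behavior depends on your Python version, and the Book class here is purely illustrative.

from __future__ import annotations

from typing import NotRequired, TypedDict


class Book(TypedDict):
    title: str
    isbn: NotRequired[str]


# On affected Python versions the stringified annotation hides NotRequired,
# so "isbn" wrongly ends up in __required_keys__. Adaptix compensates for this.
print(Book.__required_keys__)
print(Book.__optional_keys__)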

Known peculiarities and limitations#

dataclass#

  • The signature of a custom __init__ method must be the same as the signature of the one generated by @dataclass, because there is no way to distinguish them (see the sketch below).
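
A minimal sketch of this constraint; the Point class is purely illustrative.

from dataclasses import dataclass


@dataclass
class Point:
    x: int
    y: int

    # The custom __init__ keeps the signature that @dataclass would generate,
    # (self, x: int, y: int), so adaptix cannot tell it apart from the generated one.
    def __init__(self, x: int, y: int) -> None:
        self.x = x
        self.y = y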

__init__ introspection or using constructor#

  • Fields of an unpacked TypedDict (**kwargs: Unpack[YourTypedDict]) cannot collide with the parameters of the function (see the sketch below).
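
A sketch of a signature that cannot be introspected; BookExtra and make_book are hypothetical names used only for illustration.

from typing import TypedDict, Unpack  # Unpack requires Python 3.11+ or typing_extensions


class BookExtra(TypedDict):
    title: str
    page_count: int


# Not introspectable: the explicit "title" parameter collides with the
# "title" field of the unpacked TypedDict.
def make_book(title: str, **kwargs: Unpack[BookExtra]) -> dict:
    return {"title": title, **kwargs}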

sqlalchemy#

  • Only mapping to Table is supported; implementations for FromClause instances such as Subquery and Join are not provided.

  • dataclass and attrs classes mapped by sqlalchemy are not supported for introspection.

  • sqlalchemy does not preserve the registration order of mapped fields by design, so you should use manual mapping to a list instead of the automatic as_list=True (see the sketch after this list).

  • Relationships with custom collection_class are not supported.

  • All input fields for foreign keys and relationships are considered optional, because the user can pass only a relationship instance or only a foreign key value.
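
Below is a minimal sketch of manual mapping to a list; the User class and its fields are hypothetical, and the integer targets assume name_mapping's support for mapping fields to list positions.

from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column

from adaptix import Retort, name_mapping


class Base(DeclarativeBase):
    pass


class User(Base):
    __tablename__ = "users"

    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str] = mapped_column()
    email: Mapped[str] = mapped_column()


retort = Retort(
    recipe=[
        # Pin each field to an explicit list index instead of relying on
        # as_list=True, since sqlalchemy does not guarantee the order of
        # mapped fields.
        name_mapping(
            User,
            map={
                "id": 0,
                "name": 1,
                "email": 2,
            },
        ),
    ],
)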

pydantic#

  • A custom __init__ method must have only one parameter that accepts arbitrary keyword arguments (like **kwargs or **data).

  • There are three categories of fields: regular fields, computed fields (properties marked with @computed_field), and private attributes. Pydantic tracks field order within each category, but not across categories. Also, pydantic does not preserve the correct order of private attributes.

    Therefore, during dumping, regular fields come first, followed by computed fields and then private attributes. You can use manual mapping to a list instead of the automatic as_list=True to control the order.

  • Fields with constraints defined by parameters (like f1: int = Field(gt=1, ge=10)) are translated to Annotated with the corresponding metadata. The metadata is generated by Pydantic and consists of objects from the annotated_types package (like Annotated[int, Gt(gt=1), Ge(ge=10)]); see the sketch after this list.

  • Parametrized generic pydantic models do not expose the common type-hint dunders, which prevents proper type-hint introspection. This leads to incorrect generic resolution in some tricky cases.

    Also, there are some bugs in generic resolution inside pydantic itself.

  • Pydantic does not support variadic generics.

  • pydantic.dataclasses is not supported.

  • pydantic.v1 is not supported.
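
A minimal sketch of how such constraints surface as annotated_types objects; the Example class is illustrative, and the exact repr may vary between pydantic versions.

from pydantic import BaseModel, Field


class Example(BaseModel):
    f1: int = Field(gt=1, ge=10)


# Pydantic stores the constraints as annotated_types objects in the field
# metadata; these are the objects that end up in the Annotated metadata
# mentioned above.
print(Example.model_fields["f1"].metadata)  # e.g. [Gt(gt=1), Ge(ge=10)]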

Working with Pydantic#

By default, any pydantic model is loaded and dumped like any other model; for example, any aliases or config parameters defined inside the model are ignored. You can override this behavior to use the native pydantic validation/serialization mechanism.

from pydantic import BaseModel, Field

from adaptix import Retort
from adaptix.integrations.pydantic import native_pydantic


class Book(BaseModel):
    title: str = Field(alias="name")
    price: int


data = {
    "name": "Fahrenheit 451",
    "price": 100,
}

retort = Retort(
    recipe=[
        native_pydantic(Book),
    ],
)

book = retort.load(data, Book)
assert book == Book(name="Fahrenheit 451", price=100)
assert retort.dump(book) == data