Adaptix#

Overview#

Adaptix is an extremely flexible and configurable data model conversion library.

Important

It is ready for production!

The beta version only means there may be some backward incompatible changes, so you need to pin a specific version.

Installation#

pip install adaptix==3.0.0b5

Example#

Model loading and dumping#
from dataclasses import dataclass

from adaptix import Retort


@dataclass
class Book:
    title: str
    price: int


data = {
    "title": "Fahrenheit 451",
    "price": 100,
}

# Retort is meant to be a global constant or created just once
retort = Retort()

book = retort.load(data, Book)
assert book == Book(title="Fahrenheit 451", price=100)
assert retort.dump(book) == data

Converting one model to another#
from dataclasses import dataclass

from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column

from adaptix.conversion import get_converter


class Base(DeclarativeBase):
    pass


class Book(Base):
    __tablename__ = "books"

    id: Mapped[int] = mapped_column(primary_key=True)
    title: Mapped[str]
    price: Mapped[int]


@dataclass
class BookDTO:
    id: int
    title: str
    price: int


convert_book_to_dto = get_converter(Book, BookDTO)

assert (
    convert_book_to_dto(Book(id=183, title="Fahrenheit 451", price=100))
    ==
    BookDTO(id=183, title="Fahrenheit 451", price=100)
)

Requirements#

  • Python 3.8+

Use cases#

  • Validation and transformation of received data for your API.

  • Conversion between data models and DTOs.

  • Config loading/dumping via a codec that produces/takes dict.

  • Storing JSON in a database and representing it as a model inside the application code.

  • Creating API clients that convert a model to JSON sent to the server.

  • Persisting entities in cache storage.

  • Implementing a fast and primitive ORM.

Advantages#

  • Sane defaults for JSON processing, no configuration is needed for simple cases.

  • Model definition is separated from conversion rules, which preserves SRP and allows different representations for one model.

  • Speed. It is one of the fastest data parsing and serialization libraries.

  • There is no forced model representation, adaptix can adjust to your needs.

  • Supports dozens of types, including different model kinds: @dataclass, TypedDict, NamedTuple, attrs, sqlalchemy and pydantic.

  • Working with self-referenced data types (such as linked lists or trees).

  • Saving the path where an exception is raised (including unexpected errors).

  • Machine-readable errors that can be dumped.

  • Support for user-defined generic models.

  • Automatic name style conversion (e.g. snake_case to camelCase).

  • Predicate system that allows you to concisely and precisely override behavior.

  • Disabling additional checks to speed up data loading from trusted sources.

  • No auto casting by default. The loader does not try to guess value from plenty of input formats.

Further reading#

See loading and dumping tutorial and conversion tutorial for details about library usage.

Benchmarks#

Measure principles#

These benchmarks aim to make a complete, fair, and reliable comparison between different libraries across different versions of Python.

If you find a mistake in the benchmarking methods or want to add another library to the comparison, create a new issue.

All benchmarks are made via pyperf – an advanced library used to measure the performance of Python interpreters. It takes care of calibration, warming up, and gauging.

To handle a vast number of benchmark variations and make the pyperf API more convenient, a new internal framework was created. It adds no overhead and is intended only to orchestrate pyperf runs.

All measurements exclude the time required to initialize and generate the conversion function.

Each library is tested with different options that may affect performance.

All benchmarks listed below were produced with these library versions:

Library              Used version
adaptix              3.0.0a6
cattrs               23.1.2
dataclass_factory    2.16
marshmallow          3.20.1
mashumaro            3.10
msgspec              0.18.4
pydantic             2.4.2
schematics           2.1.1

Benchmarks analysis#

Important

Serializing and deserializing libraries have a lot of options that customize the conversion process. These parameters may greatly affect performance, but there is no way to create benchmarks for each combination of these options, so performance for your specific case may differ.

Simple Structures (loading)#

This benchmark examines the loading of basic structures natively supported by all the libraries.

The library has to produce models from dict:

from dataclasses import dataclass
from typing import List


@dataclass
class Review:
    id: int
    title: str
    rating: float
    content: str  # renamed to 'text'


@dataclass
class Book:
    id: int
    name: str
    reviews: List[Review]  # contains 100 items

Source Code Raw data

Cases description

adaptix

dt_all, dt_first and dt_disable mean that the debug_trail parameter of Retort is set to DebugTrail.ALL, DebugTrail.FIRST or DebugTrail.DISABLE (doc)

sc means that the strict_coercion option of Retort is enabled (doc)

msgspec

strict means that the strict parameter of convert is enabled (doc)

no_gc means that models have the gc option disabled (doc)

cattrs

dv means that the detailed_validation option of Converter is enabled (doc)

dataclass_factory

dp means that the debug_path parameter of Factory is set to True (doc)

mashumaro

lc means that the lazy_compilation flag of the model Config is enabled (doc)

pydantic

strict means that the strict parameter of model_config is enabled (doc)

Notes about implementation:

  • marshmallow cannot create an instance of a dataclass or another model, so the @post_load hook was used (doc)

  • msgspec cannot be built for PyPy


Simple Structures (dumping)#

This benchmark studies the dumping of basic structures natively supported by all the libraries.

The library has to convert the model instance to the dict used at the loading benchmark:

from dataclasses import dataclass
from typing import List


@dataclass
class Review:
    id: int
    title: str
    rating: float
    content: str  # renamed to 'text'


@dataclass
class Book:
    id: int
    name: str
    reviews: List[Review]  # contains 100 items

Source Code Raw data

Cases description

adaptix

dt_all, dt_first and dt_disable mean that the debug_trail parameter of Retort is set to DebugTrail.ALL, DebugTrail.FIRST or DebugTrail.DISABLE (doc)

msgspec

no_gc means that models have the gc option disabled (doc)

cattrs

dv means that the detailed_validation option of Converter is enabled (doc)

mashumaro

lc means that the lazy_compilation flag of the model Config is enabled (doc)

pydantic

strict means that the strict parameter of model_config is enabled (doc)

asdict

the standard library function dataclasses.asdict was used

Notes about implementation:

  • asdict does not support renaming; the produced dict contains the original field names

  • msgspec cannot be built for PyPy

  • pydantic requires using the json mode of the model_dump method to produce a JSON-serializable dict (doc)


GitHub Issues (loading)#

This benchmark examines libraries using real-world examples. It involves handling a slice of a CPython repository issues snapshot fetched via the GitHub REST API.

The library has to produce models from dict:

Processed models

The original endpoint returns an array of objects. Some libraries have no sane way to process a list of models, so the root-level list is wrapped with a GetRepoIssuesResponse model.

These models represent most of the fields returned by the endpoint, but some data is skipped. For example, milestone is left out because the CPython repo does not use it.

from dataclasses import dataclass
from datetime import datetime
from enum import Enum
from typing import List, Optional


class IssueState(str, Enum):
    OPEN = "open"
    CLOSED = "closed"


class StateReason(str, Enum):
    COMPLETED = "completed"
    REOPENED = "reopened"
    NOT_PLANNED = "not_planned"


class AuthorAssociation(str, Enum):
    COLLABORATOR = "COLLABORATOR"
    CONTRIBUTOR = "CONTRIBUTOR"
    FIRST_TIMER = "FIRST_TIMER"
    FIRST_TIME_CONTRIBUTOR = "FIRST_TIME_CONTRIBUTOR"
    MANNEQUIN = "MANNEQUIN"
    MEMBER = "MEMBER"
    NONE = "NONE"
    OWNER = "OWNER"


@dataclass
class SimpleUser:
    login: str
    id: int
    node_id: str
    avatar_url: str
    gravatar_id: Optional[str]
    url: str
    html_url: str
    followers_url: str
    following_url: str
    gists_url: str
    starred_url: str
    subscriptions_url: str
    organizations_url: str
    repos_url: str
    events_url: str
    received_events_url: str
    type: str
    site_admin: bool
    name: Optional[str] = None
    email: Optional[str] = None
    starred_at: Optional[datetime] = None


@dataclass
class Label:
    id: int
    node_id: str
    url: str
    name: str
    description: Optional[str]
    color: str
    default: bool


@dataclass
class Reactions:
    url: str
    total_count: int
    plus_one: int  # renamed to '+1'
    minus_one: int  # renamed to '-1'
    laugh: int
    confused: int
    heart: int
    hooray: int
    eyes: int
    rocket: int


@dataclass
class PullRequest:
    diff_url: Optional[str]
    html_url: Optional[str]
    patch_url: Optional[str]
    url: Optional[str]
    merged_at: Optional[datetime] = None


@dataclass
class Issue:
    id: int
    node_id: str
    url: str
    repository_url: str
    labels_url: str
    comments_url: str
    events_url: str
    html_url: str
    number: int
    state: IssueState
    state_reason: Optional[StateReason]
    title: str
    user: Optional[SimpleUser]
    labels: List[Label]
    assignee: Optional[SimpleUser]
    assignees: Optional[List[SimpleUser]]
    locked: bool
    active_lock_reason: Optional[str]
    comments: int
    closed_at: Optional[datetime]
    created_at: Optional[datetime]
    updated_at: Optional[datetime]
    author_association: AuthorAssociation
    reactions: Optional[Reactions] = None
    pull_request: Optional[PullRequest] = None
    body_html: Optional[str] = None
    body_text: Optional[str] = None
    timeline_url: Optional[str] = None
    body: Optional[str] = None


@dataclass
class GetRepoIssuesResponse:
    data: List[Issue]

Source Code Raw data

Cases description

adaptix

dt_all, dt_first and dt_disable mean that the debug_trail parameter of Retort is set to DebugTrail.ALL, DebugTrail.FIRST or DebugTrail.DISABLE (doc)

sc means that the strict_coercion option of Retort is enabled (doc)

msgspec

strict means that the strict parameter of convert is enabled (doc)

no_gc means that models have the gc option disabled (doc)

cattrs

dv means that the detailed_validation option of Converter is enabled (doc)

dataclass_factory

dp means that the debug_path parameter of Factory is set to True (doc)

mashumaro

lc means that the lazy_compilation flag of the model Config is enabled (doc)

Notes about implementation:

  • marshmallow cannot create an instance of a dataclass or another model, so the @post_load hook was used (doc)

  • msgspec cannot be built for PyPy

  • pydantic strict mode accepts only enum instances for enum fields, so it cannot be used in this benchmark (doc)

  • cattrs cannot process datetime out of the box. The custom structure hook lambda v, tp: datetime.fromisoformat(v) was used. This function does not generate a descriptive error, therefore a production implementation could be slower.


GitHub Issues (dumping)#

This benchmark examines libraries using real-world examples. It involves handling a slice of a CPython repository issues snapshot fetched via the GitHub REST API.

The library has to convert the model instance to the dict used at the loading benchmark:

Processed models

The original endpoint returns an array of objects. Some libraries have no sane way to process a list of models, so the root-level list is wrapped with a GetRepoIssuesResponse model.

These models represent most of the fields returned by the endpoint, but some data is skipped. For example, milestone is left out because the CPython repo does not use it.

The GitHub API distinguishes nullable fields from optional fields. So, default values must be omitted at dumping, but fields of type Optional[T] without a default must always be present.

from dataclasses import dataclass
from datetime import datetime
from enum import Enum
from typing import List, Optional


class IssueState(str, Enum):
    OPEN = "open"
    CLOSED = "closed"


class StateReason(str, Enum):
    COMPLETED = "completed"
    REOPENED = "reopened"
    NOT_PLANNED = "not_planned"


class AuthorAssociation(str, Enum):
    COLLABORATOR = "COLLABORATOR"
    CONTRIBUTOR = "CONTRIBUTOR"
    FIRST_TIMER = "FIRST_TIMER"
    FIRST_TIME_CONTRIBUTOR = "FIRST_TIME_CONTRIBUTOR"
    MANNEQUIN = "MANNEQUIN"
    MEMBER = "MEMBER"
    NONE = "NONE"
    OWNER = "OWNER"


@dataclass
class SimpleUser:
    login: str
    id: int
    node_id: str
    avatar_url: str
    gravatar_id: Optional[str]
    url: str
    html_url: str
    followers_url: str
    following_url: str
    gists_url: str
    starred_url: str
    subscriptions_url: str
    organizations_url: str
    repos_url: str
    events_url: str
    received_events_url: str
    type: str
    site_admin: bool
    name: Optional[str] = None
    email: Optional[str] = None
    starred_at: Optional[datetime] = None


@dataclass
class Label:
    id: int
    node_id: str
    url: str
    name: str
    description: Optional[str]
    color: str
    default: bool


@dataclass
class Reactions:
    url: str
    total_count: int
    plus_one: int  # renamed to '+1'
    minus_one: int  # renamed to '-1'
    laugh: int
    confused: int
    heart: int
    hooray: int
    eyes: int
    rocket: int


@dataclass
class PullRequest:
    diff_url: Optional[str]
    html_url: Optional[str]
    patch_url: Optional[str]
    url: Optional[str]
    merged_at: Optional[datetime] = None


@dataclass
class Issue:
    id: int
    node_id: str
    url: str
    repository_url: str
    labels_url: str
    comments_url: str
    events_url: str
    html_url: str
    number: int
    state: IssueState
    state_reason: Optional[StateReason]
    title: str
    user: Optional[SimpleUser]
    labels: List[Label]
    assignee: Optional[SimpleUser]
    assignees: Optional[List[SimpleUser]]
    locked: bool
    active_lock_reason: Optional[str]
    comments: int
    closed_at: Optional[datetime]
    created_at: Optional[datetime]
    updated_at: Optional[datetime]
    author_association: AuthorAssociation
    reactions: Optional[Reactions] = None
    pull_request: Optional[PullRequest] = None
    body_html: Optional[str] = None
    body_text: Optional[str] = None
    timeline_url: Optional[str] = None
    body: Optional[str] = None


@dataclass
class GetRepoIssuesResponse:
    data: List[Issue]

Source Code Raw data

Cases description

adaptix

dt_all, dt_first and dt_disable mean that the debug_trail parameter of Retort is set to DebugTrail.ALL, DebugTrail.FIRST or DebugTrail.DISABLE (doc)

msgspec

no_gc means that models have the gc option disabled (doc)

cattrs

dv means that the detailed_validation option of Converter is enabled (doc)

mashumaro

lc means that the lazy_compilation flag of the model Config is enabled (doc)

pydantic

strict means that the strict parameter of model_config is enabled (doc)

asdict

the standard library function dataclasses.asdict was used

Notes about implementation:

  • asdict does not support renaming; the produced dict contains the original field names

  • msgspec cannot be built for PyPy

  • pydantic requires using the json mode of the model_dump method to produce a JSON-serializable dict (doc)

  • cattrs cannot process datetime out of the box. The custom unstructure hook datetime.isoformat was used.

  • marshmallow cannot skip None values for specific fields out of the box, so @post_dump is used to remove these fields.

Tutorial#

Adaptix analyzes your type hints and generates corresponding transformers based on the retrieved information. You can flexibly tune the conversion process following the DRY principle.

Installation#

Just use pip to install the library:

pip install adaptix==3.0.0b5

Integrations with third-party libraries are turned on automatically, but you can install adaptix with extras to check that versions are compatible.

There are two variants of extras. The first one checks that the version is the same as or newer than the last supported; the second (strict) additionally checks that the version is the same as or older than the last tested version.

Extras               Versions bound
attrs                attrs >= 21.3.0
attrs-strict         attrs >= 21.3.0, <= 23.2.0
sqlalchemy           sqlalchemy >= 2.0.0
sqlalchemy-strict    sqlalchemy >= 2.0.0, <= 2.0.29
pydantic             pydantic >= 2.0.0
pydantic-strict      pydantic >= 2.0.0, <= 2.7.0

Extras are specified inside square brackets, separated by commas.

So, these are valid installation variants:

pip install adaptix[attrs-strict]==3.0.0b5
pip install adaptix[attrs, sqlalchemy-strict]==3.0.0b5

Introduction#

The central object of the library is Retort. It can create models from mapping (loading) and create mappings from the model (dumping).

from dataclasses import dataclass

from adaptix import Retort


@dataclass
class Book:
    title: str
    price: int


data = {
    "title": "Fahrenheit 451",
    "price": 100,
}

# Retort is meant to be a global constant or created just once
retort = Retort()

book = retort.load(data, Book)
assert book == Book(title="Fahrenheit 451", price=100)
assert retort.dump(book) == data

All typing information is retrieved from your annotations, so you are not required to provide any additional schema or even change your dataclass decorators or class bases.

During loading the normal dataclass constructor is called, so field defaults and __post_init__ logic work as usual.

It is better to create a retort only once because all loaders are cached inside it after the first usage. Otherwise, the structure of your classes will be analyzed again and again for every new instance of Retort.

If you don’t need any customization, you can use the predefined load and dump functions.
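
For instance, a minimal sketch reusing the Book model above (assuming the predefined helpers behave like a default Retort):

from adaptix import dump, load

# the predefined helpers act like methods of a default Retort
book = load(data, Book)
assert book == Book(title="Fahrenheit 451", price=100)
assert dump(book) == data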

Nested objects#

Nested objects are supported out of the box. It is surprising, but you do not have to do anything except define your dataclasses. For example, you expect that the author of the Book is an instance of a Person, but in the dumped form it is a dictionary.

Declare your dataclasses as usual and then just load your data.

from dataclasses import dataclass

from adaptix import Retort


@dataclass
class Person:
    name: str


@dataclass
class Book:
    title: str
    price: int
    author: Person


data = {
    "title": "Fahrenheit 451",
    "price": 100,
    "author": {
        "name": "Ray Bradbury",
    },
}

retort = Retort()

book: Book = retort.load(data, Book)
assert book == Book(title="Fahrenheit 451", price=100, author=Person("Ray Bradbury"))
assert retort.dump(book) == data

Lists and other collections#

Want to load a collection of dataclasses? No changes are required, just specify the correct target type (e.g. List[SomeClass] or Dict[str, SomeClass]).

from dataclasses import dataclass
from typing import List

from adaptix import Retort


@dataclass
class Book:
    title: str
    price: int


data = [
    {
        "title": "Fahrenheit 451",
        "price": 100,
    },
    {
        "title": "1984",
        "price": 100,
    },
]

retort = Retort()
books = retort.load(data, List[Book])
assert books == [Book(title="Fahrenheit 451", price=100), Book(title="1984", price=100)]
assert retort.dump(books, List[Book]) == data

Fields also can contain any supported collections.

Retort configuration#

There are two parameters that the Retort constructor takes.

debug_trail is responsible for saving the place where an exception was raised. By default, the retort saves all raised errors (including unexpected ones) and the paths to them. If data is loaded or dumped from a trusted source where an error is unlikely, you can change this behavior to save only the first error, with or without a trail. This will slightly improve performance if no error is raised and will have more impact if an exception occurs. More details about working with the saved trail are in Error handling.
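
For example, a minimal sketch of setting this option at construction time:

from adaptix import DebugTrail, Retort

# keep only the first raised error together with its trail
retort = Retort(debug_trail=DebugTrail.FIRST)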

strict_coercion affects only the loading process. If it is enabled (the default), a type will be converted only if two conditions are met:

  1. There is only one way to perform the conversion

  2. No information will be lost

So this mode forbids converting dict to list (dict values would be lost) and forbids converting str to int (we do not know which base must be used), but allows converting str to Decimal (the base is always 10 by definition).

Strict coercion requires additional type checks before calling the main constructor, therefore disabling it can improve performance.
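
A minimal sketch of the difference (assuming that, with strict coercion disabled, a string is simply passed to the int constructor):

from adaptix import Retort
from adaptix.load_error import LoadError

strict_retort = Retort()  # strict_coercion is enabled by default
lenient_retort = Retort(strict_coercion=False)

# strict coercion rejects str -> int: the base to parse with is ambiguous
try:
    strict_retort.load("100", int)
except LoadError:
    pass

# without strict coercion the value goes through the int constructor
assert lenient_retort.load("100", int) == 100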

Retort recipe#

Retort also supports a more powerful and more flexible configuration system via recipe. It implements the chain-of-responsibility design pattern. The recipe consists of providers, each of which can precisely override one of the retort's behavior aspects.

from dataclasses import dataclass
from datetime import datetime, timezone

from adaptix import Retort, loader


@dataclass
class Book:
    title: str
    price: int
    created_at: datetime


data = {
    "title": "Fahrenheit 451",
    "price": 100,
    "created_at": 1674938508.599962,
}

retort = Retort(
    recipe=[
        loader(datetime, lambda x: datetime.fromtimestamp(x, tz=timezone.utc)),
    ],
)

book = retort.load(data, Book)
assert book == Book(
    title="Fahrenheit 451",
    price=100,
    created_at=datetime(2023, 1, 28, 20, 41, 48, 599962, tzinfo=timezone.utc),
)

The default datetime loader accepts only str in ISO 8601 format; loader(datetime, lambda x: datetime.fromtimestamp(x, tz=timezone.utc)) replaces it with the specified lambda function that takes a Unix timestamp.

Same example but with a dumper
from dataclasses import dataclass
from datetime import datetime, timezone

from adaptix import Retort, dumper, loader


@dataclass
class Book:
    title: str
    price: int
    created_at: datetime


data = {
    "title": "Fahrenheit 451",
    "price": 100,
    "created_at": 1674938508.599962,
}

retort = Retort(
    recipe=[
        loader(datetime, lambda x: datetime.fromtimestamp(x, tz=timezone.utc)),
        dumper(datetime, lambda x: x.timestamp()),
    ],
)

book = retort.load(data, Book)
assert book == Book(
    title="Fahrenheit 451",
    price=100,
    created_at=datetime(2023, 1, 28, 20, 41, 48, 599962, tzinfo=timezone.utc),
)
assert retort.dump(book) == data

Providers at the start of the recipe have higher priority because they override subsequent ones.

from dataclasses import dataclass

from adaptix import Retort, loader


@dataclass
class Foo:
    value: int


def add_one(data):
    return data + 1


def add_two(data):
    return data + 2


retort = Retort(
    recipe=[
        loader(int, add_one),
        loader(int, add_two),
    ],
)

assert retort.load({"value": 10}, Foo) == Foo(11)

Basic providers overview#

The list of providers is not limited to loader and dumper; there are a lot of other high-level helpers. Here are some of them:

  1. constructor creates a loader that extracts data from dict and passes it to the given function.

  2. name_mapping renames and skips model fields for the outside world. You can change the naming convention to camelCase via the name_style parameter or rename individual fields via map.

  3. with_property allows dumping properties of the model like other fields.

  4. enum_by_exact_value is the default behavior for all enums. It uses enum values without any conversions to represent enum cases.

  5. enum_by_name allows representing enums by their names (see the sketch after this list).

  6. enum_by_value takes the type of enum values and uses it to load or dump enum cases.
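
For example, a minimal sketch of enum_by_name (names, not values, appear in the outer representation):

from enum import Enum

from adaptix import Retort, enum_by_name


class Genre(Enum):
    PROSE = 1
    POETRY = 2


retort = Retort(
    recipe=[
        enum_by_name(Genre),
    ],
)

assert retort.load("PROSE", Genre) == Genre.PROSE
assert retort.dump(Genre.POETRY) == "POETRY"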

Predicate system#

So far, all examples use classes to apply providers, but you can specify other conditions. There is a single predicate system used by most of the built-in providers.

Basic rules:

  1. If you pass a class, the provider will be applied to all occurrences of that exact type.

  2. If you pass an abstract class, the provider will be applied to all subclasses.

  3. If you pass a runtime checkable protocol, the provider will be applied to all protocol implementations.

  4. If you pass a string, it will be interpreted as a regex, and the provider will be applied to all fields whose id is matched by the regex, as shown below. In most cases, the field_id is the name of the field at the class definition. Any field_id must be a valid Python identifier, so if you pass a field_id directly, it will match only that exact string.
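
For example, a minimal sketch of a string predicate, which applies to every model:

from datetime import datetime, timezone

from adaptix import Retort, loader

# matches every field named "created_at", regardless of the model
retort = Retort(
    recipe=[
        loader("created_at", lambda x: datetime.fromtimestamp(x, tz=timezone.utc)),
    ],
)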

Using a string directly as a predicate is often inconvenient because it matches fields with the same name in all models, so there is a special helper for this case.

from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List

from adaptix import P, Retort, loader


@dataclass
class Person:
    id: int
    name: str
    created_at: datetime


@dataclass
class Book:
    name: str
    price: int
    created_at: datetime


@dataclass
class Bookshop:
    workers: List[Person]
    books: List[Book]


data = {
    "workers": [
        {
            "id": 193,
            "name": "Kate",
            "created_at": "2023-01-29T21:26:28.026860+00:00",
        },
    ],
    "books": [
        {
            "name": "Fahrenheit 451",
            "price": 100,
            "created_at": 1674938508.599962,
        },
    ],
}

retort = Retort(
    recipe=[
        loader(P[Book].created_at, lambda x: datetime.fromtimestamp(x, tz=timezone.utc)),
    ],
)

bookshop = retort.load(data, Bookshop)

assert bookshop == Bookshop(
    workers=[
        Person(
            id=193,
            name="Kate",
            created_at=datetime(2023, 1, 29, 21, 26, 28, 26860, tzinfo=timezone.utc),
        ),
    ],
    books=[
        Book(
            name="Fahrenheit 451",
            price=100,
            created_at=datetime(2023, 1, 28, 20, 41, 48, 599962, tzinfo=timezone.utc),
        ),
    ],
)

P represents a pattern of a path in the structure definition. P[Book].created_at will match the field created_at only if it is placed inside the model Book.

Some facts about P:

  1. P['name'] is the same as P.name

  2. P[Foo] is the same as Foo predicate

  3. P[Foo] + P.name is the same as P[Foo].name

  4. P[Foo, Bar] matches class Foo or class Bar

  5. P can be combined via |, &, ^, and it can be inverted using ~ (see the sketch after this list)

  6. P can be expanded without limit. P[Foo].name[Bar].age is valid and matches the field age located in the model Bar, which sits in the field name of the model Foo
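
For example, a minimal sketch of combining predicates (Foo and Bar are hypothetical models):

from dataclasses import dataclass

from adaptix import P, Retort, loader


@dataclass
class Foo:
    count: int


@dataclass
class Bar:
    count: int
    total: int


# matches `count` inside Foo, or a field named `total` in any model
retort = Retort(
    recipe=[
        loader(P[Foo].count | P.total, lambda x: x * 2),
    ],
)

assert retort.load({"count": 1}, Foo) == Foo(count=2)
assert retort.load({"count": 1, "total": 3}, Bar) == Bar(count=1, total=6)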

Retort extension and combination#

No changes can be made after the retort creation. You can only create a new retort object based on the existing one.

The replace method is used to change the scalar options debug_trail and strict_coercion.

from adaptix import DebugTrail, Retort

external_retort = Retort(
    recipe=[
        # very complex configuration
    ],
)

# create retort to faster load data from an internal trusted source
# where it already validated
internal_retort = external_retort.replace(
    strict_coercion=False,
    debug_trail=DebugTrail.DISABLE,
)

The extend method adds items to the beginning of the recipe. This allows following the DRY principle.

from datetime import datetime

from adaptix import Retort, dumper, loader

base_retort = Retort(
    recipe=[
        loader(datetime, datetime.fromtimestamp),
        dumper(datetime, datetime.timestamp),
    ],
)

specific_retort1 = base_retort.extend(
    recipe=[
        loader(bytes, bytes.fromhex),
        dumper(bytes, bytes.hex),
    ],
)

# same as

specific_retort2 = Retort(
    recipe=[
        loader(bytes, bytes.fromhex),
        dumper(bytes, bytes.hex),
        loader(datetime, datetime.fromtimestamp),
        dumper(datetime, datetime.timestamp),
    ],
)

You can include one retort in another; this allows separating the creation of loaders and dumpers for specific types into isolated layers.

from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum
from typing import List

from adaptix import Retort, bound, dumper, enum_by_name, loader


class LiteraryGenre(Enum):
    DRAMA = 1
    FOLKLORE = 2
    POETRY = 3
    PROSE = 4


@dataclass
class LiteraryWork:
    id: int
    name: str
    genre: LiteraryGenre
    uploaded_at: datetime


literature_retort = Retort(
    recipe=[
        loader(datetime, lambda x: datetime.fromtimestamp(x, tz=timezone.utc)),
        dumper(datetime, lambda x: x.timestamp()),
        enum_by_name(LiteraryGenre),
    ],
)


# another module and another abstraction level

@dataclass
class Person:
    name: str
    works: List[LiteraryWork]


retort = Retort(
    recipe=[
        bound(LiteraryWork, literature_retort),
    ],
)

data = {
    "name": "Ray Bradbury",
    "works": [
        {
            "id": 7397,
            "name": "Fahrenheit 451",
            "genre": "PROSE",
            "uploaded_at": 1675111113,
        },
    ],
}

person = retort.load(data, Person)
assert person == Person(
    name="Ray Bradbury",
    works=[
        LiteraryWork(
            id=7397,
            name="Fahrenheit 451",
            genre=LiteraryGenre.PROSE,
            uploaded_at=datetime(2023, 1, 30, 20, 38, 33, tzinfo=timezone.utc),
        ),
    ],
)

In this example, the loader and dumper for LiteraryWork will be created by literature_retort (note that the debug_trail and strict_coercion options of the upper-level retort do not affect inner retorts).

A retort is a provider that proxies the search into its own recipe, so if you pass a retort without the bound wrapper, it will be used for all loaders and dumpers, overriding all subsequent providers.

Provider chaining#

Sometimes you want to add some additional data processing before or after the existing converter instead of fully replacing it. This is called chaining.

The third parameter of loader and dumper controls the chaining process. Chain.FIRST means that the result of the given function will be passed to the next matched loader/dumper in the recipe; Chain.LAST applies your function after the one generated by the next provider.

import json
from dataclasses import dataclass
from datetime import datetime

from adaptix import Chain, P, Retort, dumper, loader


@dataclass
class Book:
    title: str
    price: int
    author: str


@dataclass
class Message:
    id: str
    timestamp: datetime
    body: Book


data = {
    "id": "ajsVre",
    "timestamp": "2023-01-29T21:26:28.026860",
    "body": '{"title": "Fahrenheit 451", "price": 100, "author": "Ray Bradbury"}',
}

retort = Retort(
    recipe=[
        loader(P[Message].body, json.loads, Chain.FIRST),
        dumper(P[Message].body, json.dumps, Chain.LAST),
    ],
)

message = retort.load(data, Message)
assert message == Message(
    id="ajsVre",
    timestamp=datetime(2023, 1, 29, 21, 26, 28, 26860),
    body=Book(
        title="Fahrenheit 451",
        price=100,
        author="Ray Bradbury",
    ),
)

Validators#

validator is a convenient wrapper over loader and chaining to create a verifier of input data.

from dataclasses import dataclass

from adaptix import P, Retort, validator
from adaptix.load_error import AggregateLoadError, LoadError, ValidationLoadError


@dataclass
class Book:
    title: str
    price: int


data = {
    "title": "Fahrenheit 451",
    "price": -10,
}

retort = Retort(
    recipe=[
        validator(P[Book].price, lambda x: x >= 0, "value must be greater or equal 0"),
    ],
)

try:
    retort.load(data, Book)
except AggregateLoadError as e:
    assert len(e.exceptions) == 1
    assert isinstance(e.exceptions[0], ValidationLoadError)
    assert e.exceptions[0].msg == "value must be greater or equal 0"


class BelowZeroError(LoadError):
    def __init__(self, actual_value: int):
        self.actual_value = actual_value

    def __str__(self):
        return f"actual_value={self.actual_value}"


retort = Retort(
    recipe=[
        validator(P[Book].price, lambda x: x >= 0, lambda x: BelowZeroError(x)),
    ],
)

try:
    retort.load(data, Book)
except AggregateLoadError as e:
    assert len(e.exceptions) == 1
    assert isinstance(e.exceptions[0], BelowZeroError)
    assert e.exceptions[0].actual_value == -10

If the test function returns False, an exception will be raised. You can pass an exception factory that returns the actual exception, or pass a string to raise a ValidationLoadError instance.

Traceback of raised errors
+ Exception Group Traceback (most recent call last):
|   File "/.../docs/examples/tutorial/validators.py", line 24, in <module>
|     retort.load(data, Book)
|   File "/.../adaptix/_internal/facade/retort.py", line 278, in load
|     return self.get_loader(tp)(data)
|            ^^^^^^^^^^^^^^^^^^^^^^^^^
|   File "model_loader_Book", line 76, in model_loader_Book
| adaptix.load_error.AggregateLoadError: while loading model <class '__main__.Book'> (1 sub-exception)
+-+---------------- 1 ----------------
  | Traceback (most recent call last):
  |   File "model_loader_Book", line 51, in model_loader_Book
  |   File "/.../adaptix/_internal/provider/provider_wrapper.py", line 86, in chain_processor
  |     return second(first(data))
  |            ^^^^^^^^^^^^^^^^^^^
  |   File "/.../adaptix/_internal/facade/provider.py", line 360, in validating_loader
  |     raise exception_factory(data)
  | adaptix.load_error.ValidationLoadError: msg='value must be greater or equal 0', input_value=-10
  | Exception was caused at ['price']
  +------------------------------------
+ Exception Group Traceback (most recent call last):
|   File "/.../docs/examples/tutorial/validators.py", line 53, in <module>
|     retort.load(data, Book)
|   File "/.../adaptix/_internal/facade/retort.py", line 278, in load
|     return self.get_loader(tp)(data)
|            ^^^^^^^^^^^^^^^^^^^^^^^^^
|   File "model_loader_Book", line 76, in model_loader_Book
| adaptix.load_error.AggregateLoadError: while loading model <class '__main__.Book'> (1 sub-exception)
+-+---------------- 1 ----------------
  | Traceback (most recent call last):
  |   File "model_loader_Book", line 51, in model_loader_Book
  |   File "/.../adaptix/_internal/provider/provider_wrapper.py", line 86, in chain_processor
  |     return second(first(data))
  |            ^^^^^^^^^^^^^^^^^^^
  |   File "/.../adaptix/_internal/facade/provider.py", line 360, in validating_loader
  |     raise exception_factory(data)
  | BelowZeroError: actual_value=-10
  | Exception was caused at ['price']
  +------------------------------------

Error handling#

All loaders have to throw LoadError to signal invalid input data. Other exceptions mean errors in the loaders themselves. All built-in LoadError children are listed in the adaptix.load_error subpackage and are designed to produce machine-readable structured errors.

from dataclasses import dataclass

from adaptix import Retort
from adaptix.load_error import AggregateLoadError, LoadError


@dataclass
class Book:
    title: str
    price: int
    author: str = "Unknown author"


data = {
    # Field values are mixed up
    "title": 100,
    "price": "Fahrenheit 451",
}

retort = Retort()

try:
    retort.load(data, Book)
except LoadError as e:
    assert isinstance(e, AggregateLoadError)

Traceback of raised error (DebugTrail.ALL)
+ Exception Group Traceback (most recent call last):
|   ...
| adaptix.load_error.AggregateLoadError: while loading model <class '__main__.Book'> (2 sub-exceptions)
+-+---------------- 1 ----------------
  | Traceback (most recent call last):
  |   ...
  | adaptix.load_error.TypeLoadError: expected_type=<class 'int'>, input_value='Fahrenheit 451'
  | Exception was caused at ['price']
  +---------------- 2 ----------------
  | Traceback (most recent call last):
  |   ...
  | adaptix.load_error.TypeLoadError: expected_type=<class 'str'>, input_value=100
  | Exception was caused at ['title']
  +------------------------------------

By default, all thrown errors are collected into AggregateLoadError, and each exception has an additional note describing the path to the place where the error occurred. This path is called a struct trail and acts like a JSONPath pointing to a location inside the input data.

For Python versions below 3.11, the extra package exceptiongroup is used. This package patches some functions from traceback during import to backport ExceptionGroup rendering to earlier versions. More details are in its documentation.

By default, all collection-like and model-like loaders wrap all errors into AggregateLoadError. Each sub-exception contains a trail relative to the parent exception.

Non-guaranteed behavior

Order of errors inside AggregateLoadError is not guaranteed.

You can set debug_trail=DebugTrail.FIRST at Retort to raise only the first encountered error.

Traceback of raised error (DebugTrail.FIRST)
Traceback (most recent call last):
  ...
adaptix.load_error.TypeLoadError: expected_type=<class 'int'>, input_value='Fahrenheit 451'
Exception was caused at ['price']

Changing debug_trail to DebugTrail.DISABLE makes the raised exception act like any normal exception.

Traceback of raised error (DebugTrail.DISABLE)
Traceback (most recent call last):
  ...
adaptix.load_error.TypeLoadError: expected_type=<class 'int'>, input_value='Fahrenheit 451'

If there is at least one unexpected error, AggregateLoadError is replaced by the standard ExceptionGroup. For the dumping process, any exception is unexpected, so it will always be wrapped in an ExceptionGroup.

from dataclasses import dataclass
from datetime import datetime

from adaptix import Retort, loader
from adaptix.struct_trail import Attr, get_trail


@dataclass
class Book:
    title: str
    price: int
    created_at: datetime


data = {
    "title": "Fahrenheit 451",
    "price": 100,
    "created_at": "2023-10-07T16:25:19.303579",
}


def broken_title_loader(data):
    raise ArithmeticError("Some unexpected error")


retort = Retort(
    recipe=[
        loader("title", broken_title_loader),
    ],
)

try:
    retort.load(data, Book)
except Exception as e:
    assert isinstance(e, ExceptionGroup)
    assert len(e.exceptions) == 1
    assert isinstance(e.exceptions[0], ArithmeticError)
    assert list(get_trail(e.exceptions[0])) == ["title"]

book = Book(
    title="Fahrenheit 451",
    price=100,
    created_at=None,  # type: ignore[arg-type]
)

try:
    retort.dump(book)
except Exception as e:
    assert isinstance(e, ExceptionGroup)
    assert len(e.exceptions) == 1
    assert isinstance(e.exceptions[0], TypeError)
    assert list(get_trail(e.exceptions[0])) == [Attr("created_at")]

The trail of an exception is stored in a special private attribute and can be accessed via get_trail.

As you can see, trail elements after dumping are wrapped in Attr. It is necessary because plain str or int instances mean that the data can be accessed via [].

Extended usage#

This section continues the tutorial to illuminate some more complex topics.

Generic classes#

Generic classes are supported out of the box.

from dataclasses import dataclass
from typing import Generic, Optional, TypeVar

from adaptix import Retort

T = TypeVar("T")


@dataclass
class MinMax(Generic[T]):
    min: Optional[T] = None
    max: Optional[T] = None


retort = Retort()

data = {"min": 10, "max": 20}
min_max = retort.load(data, MinMax[int])
assert min_max == MinMax(min=10, max=20)
assert retort.dump(min_max, MinMax[int]) == data

If a generic class is not parametrized, the Python specification requires assuming Any for each position. Adaptix acts slightly differently: it derives implicit parameters based on TypeVar properties.

TypeVar                          Derived implicit parameter
T = TypeVar('T')                 Any
B = TypeVar('B', bound=Book)     Book
C = TypeVar('C', str, bytes)     Union[str, bytes]

You should always pass a concrete type as the second argument of the Retort.dump method. There is no way to determine the type parameter of an object at runtime due to type erasure. If you pass a non-parametrized generic, the retort will raise an error.
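
A minimal sketch of both points, reusing the MinMax model and retort above (assuming the implicit Any parameter lets values pass through unchanged):

# T is implicitly Any here, so values pass through untouched at loading
assert retort.load({"min": 1, "max": "2"}, MinMax) == MinMax(min=1, max="2")

# dumping requires an explicit parametrization
assert retort.dump(MinMax(min=1, max=2), MinMax[int]) == {"min": 1, "max": 2}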

Recursive data types#

These types can be loaded and dumped without additional configuration.

from dataclasses import dataclass
from typing import List

from adaptix import Retort


@dataclass
class ItemCategory:
    id: int
    name: str
    sub_categories: List["ItemCategory"]


retort = Retort()

data = {
    "id": 1,
    "name": "literature",
    "sub_categories": [
        {
            "id": 2,
            "name": "novel",
            "sub_categories": [],
        },
    ],
}
item_category = retort.load(data, ItemCategory)
assert item_category == ItemCategory(
    id=1,
    name="literature",
    sub_categories=[
        ItemCategory(
            id=2,
            name="novel",
            sub_categories=[],
        ),
    ],
)
assert retort.dump(item_category) == data

But it does not work with cyclically referenced objects, like

item_category.sub_categories.append(item_category)

Name mapping#

The name mapping mechanism allows precise control over the outer representation of a model.

It is configured entirely via name_mapping.

The first argument of this function is a predicate, which selects the affected classes (see Predicate system for details). If it is omitted, the rules will be applied to all models.

Mutating field name#

There are several ways to change the name of a field for loading and dumping.

Field renaming#

Sometimes you have JSON with keys that leave much to be desired. For example, they might be invalid Python identifiers or just have unclear meanings. The simplest way to fix this is to use name_mapping.map to rename them.

from dataclasses import dataclass
from datetime import datetime, timezone

from adaptix import Retort, name_mapping


@dataclass
class Event:
    name: str
    timestamp: datetime


retort = Retort(
    recipe=[
        name_mapping(
            Event,
            map={
                "timestamp": "ts",
            },
        ),
    ],
)

data = {
    "name": "SystemStart",
    "ts": "2023-05-14T00:06:33+00:00",
}
event = retort.load(data, Event)
assert event == Event(
    name="SystemStart",
    timestamp=datetime(2023, 5, 14, 0, 6, 33, tzinfo=timezone.utc),
)
assert retort.dump(event) == data

The keys of map refer to field names at the model definition, and the values contain the new field names.

Fields absent in map are not translated and used with their original names.

There are more complex and more powerful use cases of map, which will be described at Advanced mapping.

Name style#

Sometimes JSON keys are quite normal but do not fit PEP 8 recommendations for variable naming. You can rename each field individually, but the library can translate such names automatically.

from dataclasses import dataclass

from adaptix import NameStyle, Retort, name_mapping


@dataclass
class Person:
    first_name: str
    last_name: str


retort = Retort(
    recipe=[
        name_mapping(
            Person,
            name_style=NameStyle.CAMEL,
        ),
    ],
)

data = {
    "firstName": "Richard",
    "lastName": "Stallman",
}
event = retort.load(data, Person)
assert event == Person(first_name="Richard", last_name="Stallman")
assert retort.dump(event) == data

See NameStyle for a list of all available target styles.

You cannot automatically convert names that do not follow the snake_case style. name_mapping.map takes precedence over name_mapping.name_style, so you can use it to rename fields that do not follow snake_case or to override the automatic style adjustment, as shown below.
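
For example, a minimal sketch of combining both parameters (the surname mapping is illustrative):

from dataclasses import dataclass

from adaptix import NameStyle, Retort, name_mapping


@dataclass
class Person:
    first_name: str
    last_name: str


retort = Retort(
    recipe=[
        name_mapping(
            Person,
            name_style=NameStyle.CAMEL,
            map={"last_name": "surname"},  # map wins over name_style
        ),
    ],
)

data = {
    "firstName": "Richard",
    "surname": "Stallman",
}
person = retort.load(data, Person)
assert person == Person(first_name="Richard", last_name="Stallman")
assert retort.dump(person) == data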

Stripping underscore#

Sometimes an API uses reserved Python keywords, which therefore cannot be used as field names. Usually, this is solved by adding a trailing underscore to the field name (e.g. from_ or import_). Retort trims the trailing underscore automatically.

from dataclasses import dataclass

from adaptix import Retort


@dataclass
class Interval:
    from_: int
    to_: int


retort = Retort()

data = {
    "from": 10,
    "to": 20,
}
event = retort.load(data, Interval)
assert event == Interval(from_=10, to_=20)
assert retort.dump(event) == data

If this behavior is unwanted, you can disable the feature by setting trim_trailing_underscore=False.

from dataclasses import dataclass

from adaptix import Retort, name_mapping


@dataclass
class Interval:
    from_: int
    to_: int


retort = Retort(
    recipe=[
        name_mapping(
            Interval,
            trim_trailing_underscore=False,
        ),
    ],
)

data = {
    "from_": 10,
    "to_": 20,
}
event = retort.load(data, Interval)
assert event == Interval(from_=10, to_=20)
assert retort.dump(event) == data

name_mapping.map is prioritized over name_mapping.trim_trailing_underscore.

Fields filtering#

You can select which fields will be loaded or dumped. Two parameters can be used for this: name_mapping.skip and name_mapping.only.

from dataclasses import dataclass

from adaptix import NoSuitableProvider, Retort, name_mapping


@dataclass
class User:
    id: int
    name: str
    password_hash: str


retort = Retort(
    recipe=[
        name_mapping(
            User,
            skip=["password_hash"],
        ),
    ],
)


user = User(
    id=52,
    name="Ken Thompson",
    password_hash="ZghOT0eRm4U9s",
)
data = {
    "id": 52,
    "name": "Ken Thompson",
}
assert retort.dump(user) == data

try:
    retort.get_loader(User)
except NoSuitableProvider:
    pass

Traceback of raised error
  + Exception Group Traceback (most recent call last):
  |   ...
  | adaptix.AggregateCannotProvide: Cannot create loader for model. Cannot fetch InputNameLayout (1 sub-exception)
  | Location: type=<class 'docs.examples.extended_usage.fields_filtering_skip.User'>
  +-+---------------- 1 ----------------
    | Traceback (most recent call last):
    |   ...
    | adaptix.CannotProvide: Required fields ['password_hash'] are skipped
    | Location: type=<class 'docs.examples.extended_usage.fields_filtering_skip.User'>
    +------------------------------------

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  ...
adaptix.NoSuitableProvider: Cannot produce loader for type <class 'docs.examples.extended_usage.fields_filtering_skip.User'>

Excluding a required field makes it impossible to create a loader, but the dumper will work properly.

Same example but with using only
from dataclasses import dataclass

from adaptix import NoSuitableProvider, Retort, name_mapping


@dataclass
class User:
    id: int
    name: str
    password_hash: str


retort = Retort(
    recipe=[
        name_mapping(
            User,
            only=["id", "name"],
        ),
    ],
)


user = User(
    id=52,
    name="Ken Thompson",
    password_hash="ZghOT0eRm4U9s",
)
data = {
    "id": 52,
    "name": "Ken Thompson",
}
assert retort.dump(user) == data

try:
    retort.get_loader(User)
except NoSuitableProvider:
    pass
  + Exception Group Traceback (most recent call last):
  |   ...
  | adaptix.AggregateCannotProvide: Cannot create loader for model. Cannot fetch InputNameLayout (1 sub-exception)
  | Location: type=<class 'docs.examples.extended_usage.fields_filtering_only.User'>
  +-+---------------- 1 ----------------
    | Traceback (most recent call last):
    |   ...
    | adaptix.CannotProvide: Required fields ['password_hash'] are skipped
    | Location: type=<class 'docs.examples.extended_usage.fields_filtering_only.User'>
    +------------------------------------

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  ...
adaptix.NoSuitableProvider: Cannot produce loader for type <class 'docs.examples.extended_usage.fields_filtering_only.User'>

Skipping optional field
from dataclasses import dataclass

from adaptix import Retort, name_mapping


@dataclass
class User:
    id: int
    name: str
    trust_rating: float = 0


retort = Retort(
    recipe=[
        name_mapping(
            User,
            skip=["trust_rating"],
        ),
    ],
)


data = {
    "id": 52,
    "name": "Ken Thompson",
}
data_with_trust_rating = {
    **data,
    "trust_rating": 100,
}
assert retort.load(data, User) == User(id=52, name="Ken Thompson")
assert retort.load(data_with_trust_rating, User) == User(id=52, name="Ken Thompson")
assert retort.dump(User(id=52, name="Ken Thompson", trust_rating=100)) == data

Both parameters take a predicate or an iterable of predicates, so you can use all features of the Predicate system. For example, you can filter fields based on their type.

from dataclasses import dataclass

from adaptix import Retort, dumper, loader, name_mapping


class HiddenStr(str):
    def __repr__(self):
        return "'<hidden>'"


@dataclass
class User:
    id: int
    name: str
    password_hash: HiddenStr


retort = Retort(
    recipe=[
        loader(HiddenStr, HiddenStr),
        dumper(HiddenStr, str),
    ],
)
skipping_retort = retort.extend(
    recipe=[
        name_mapping(
            User,
            skip=HiddenStr,
        ),
    ],
)

user = User(
    id=52,
    name="Ken Thompson",
    password_hash=HiddenStr("ZghOT0eRm4U9s"),
)
data = {
    "id": 52,
    "name": "Ken Thompson",
}
data_with_password_hash = {
    **data,
    "password_hash": "ZghOT0eRm4U9s",
}
assert repr(user) == "User(id=52, name='Ken Thompson', password_hash='<hidden>')"
assert retort.dump(user) == data_with_password_hash
assert retort.load(data_with_password_hash, User) == user
assert skipping_retort.dump(user) == data

Omit default#

If you have defaults for some fields, it could be unnecessary to store them in the dumped representation. You can omit them when serializing via the name_mapping.omit_default parameter. Values that are equal to the default will be stripped from the resulting dict.

from dataclasses import dataclass, field
from typing import List, Optional

from adaptix import Retort, name_mapping


@dataclass
class Book:
    title: str
    sub_title: Optional[str] = None
    authors: List[str] = field(default_factory=list)


retort = Retort(
    recipe=[
        name_mapping(
            Book,
            omit_default=True,
        ),
    ],
)

book = Book(title="Fahrenheit 451")
assert retort.dump(book) == {"title": "Fahrenheit 451"}

By default, omit_default is disabled; you can set it to True, which will affect all fields. Also, you can pass any predicate or iterable of predicates to apply the rule only to selected fields.

from dataclasses import dataclass, field
from typing import List, Optional

from adaptix import Retort, name_mapping


@dataclass
class Book:
    title: str
    sub_title: Optional[str] = None
    authors: List[str] = field(default_factory=list)


retort = Retort(
    recipe=[
        name_mapping(
            Book,
            omit_default="authors",
        ),
    ],
)

book = Book(title="Fahrenheit 451")
assert retort.dump(book) == {"title": "Fahrenheit 451", "sub_title": None}

Unknown fields processing#

Unknown fields are keys of a mapping that do not map to any known field.

By default, all extra data that is absent in the target structure is ignored. You can change this behavior via the name_mapping.extra_in and name_mapping.extra_out parameters.

Field renaming does not affect unknown fields; collected unknown fields keep their original names.

On loading#

The name_mapping.extra_in parameter controls the policy for how extra data is saved.

ExtraSkip#

Default behaviour. All extra data is ignored.

from dataclasses import dataclass

from adaptix import Retort


@dataclass
class Book:
    title: str
    price: int


data = {
    "title": "Fahrenheit 451",
    "price": 100,
    "unknown1": 1,
    "unknown2": 2,
}

retort = Retort()

book = retort.load(data, Book)
assert book == Book(title="Fahrenheit 451", price=100)

ExtraForbid#

This policy raises load_error.ExtraFieldsLoadError if any unknown field is found.

from dataclasses import dataclass

from adaptix import ExtraForbid, Retort, name_mapping
from adaptix.load_error import AggregateLoadError, ExtraFieldsLoadError


@dataclass
class Book:
    title: str
    price: int


data = {
    "title": "Fahrenheit 451",
    "price": 100,
    "unknown1": 1,
    "unknown2": 2,
}

retort = Retort(
    recipe=[
        name_mapping(Book, extra_in=ExtraForbid()),
    ],
)

try:
    retort.load(data, Book)
except AggregateLoadError as e:
    assert len(e.exceptions) == 1
    assert isinstance(e.exceptions[0], ExtraFieldsLoadError)
    assert set(e.exceptions[0].fields) == {"unknown1", "unknown2"}

Non-guaranteed behavior

Order of fields inside load_error.ExtraFieldsLoadError is not guaranteed and can be unstable between runs.

ExtraKwargs#

Extra data is passed as additional keyword arguments.

from adaptix import ExtraKwargs, Retort, name_mapping


class Book:
    def __init__(self, title: str, price: int, **kwargs):
        self.title = title
        self.price = price
        self.kwargs = kwargs

    def __eq__(self, other):
        return (
            self.title == other.title
            and self.price == other.price
            and self.kwargs == other.kwargs
        )


data = {
    "title": "Fahrenheit 451",
    "price": 100,
    "unknown1": 1,
    "unknown2": 2,
}

retort = Retort(
    recipe=[
        name_mapping(Book, extra_in=ExtraKwargs()),
    ],
)

book = retort.load(data, Book)
assert book == Book(title="Fahrenheit 451", price=100, unknown1=1, unknown2=2)

This policy has significant design flaws and, generally, should not be used.

All extra fields are passed as additional keyword arguments without any conversion; the specified type of **kwargs is ignored.

If an unknown field collides with an original field name, a TypeError will be raised and treated as an unexpected error.

from adaptix import ExtraKwargs, Retort, name_mapping


class Book:
    def __init__(self, title: str, price: int, **kwargs):
        self.title = title
        self.price = price
        self.kwargs = kwargs

    def __eq__(self, other):
        return (
            self.title == other.title
            and self.price == other.price
            and self.kwargs == other.kwargs
        )


data = {
    "name": "Fahrenheit 451",
    "price": 100,
    "title": "Celsius 232.778",
}

retort = Retort(
    recipe=[
        name_mapping(Book, map={"title": "name"}),
        name_mapping(Book, extra_in=ExtraKwargs()),
    ],
)

try:
    retort.load(data, Book)
except TypeError as e:
    assert str(e).endswith("__init__() got multiple values for argument 'title'")

The following strategy has no such problems.

Field id#

You can pass a string with a field name. The loader of the corresponding field will receive a mapping with the unknown data.

from dataclasses import dataclass
from typing import Any, Mapping

from adaptix import Retort, name_mapping


@dataclass
class Book:
    title: str
    price: int
    extra: Mapping[str, Any]


data = {
    "title": "Fahrenheit 451",
    "price": 100,
    "unknown1": 1,
    "unknown2": 2,
}

retort = Retort(
    recipe=[
        name_mapping(Book, extra_in="extra"),
    ],
)

book = retort.load(data, Book)
assert book == Book(
    title="Fahrenheit 451",
    price=100,
    extra={
        "unknown1": 1,
        "unknown2": 2,
    },
)

Also, you can pass an Iterable[str]. Each field loader will receive the same mapping of unknown data.

Saturator function#

There is a way to use a custom mechanism for saving unknown fields.

You can pass a callable named a ‘saturator’ that takes the created model and a mapping of unknown data. The precise type hint is Callable[[T, Mapping[str, Any]], None]. This callable can mutate the model to inject the unknown data as you want.

from dataclasses import dataclass
from typing import Any, Mapping

from adaptix import Retort, name_mapping


@dataclass
class Book:
    title: str
    price: int


data = {
    "title": "Fahrenheit 451",
    "price": 100,
    "unknown1": 1,
    "unknown2": 2,
}


def attr_saturator(model: Book, extra_data: Mapping[str, Any]) -> None:
    for key, value in extra_data.items():
        setattr(model, key, value)


retort = Retort(
    recipe=[
        name_mapping(Book, extra_in=attr_saturator),
    ],
)

book = retort.load(data, Book)
assert book == Book(title="Fahrenheit 451", price=100)
assert book.unknown1 == 1  # type: ignore[attr-defined]
assert book.unknown2 == 2  # type: ignore[attr-defined]
On dumping#

Parameter name_mapping.extra_out controls the policy of how extra data is extracted at dumping.

ExtraSkip#

Default behaviour. All extra data is ignored.

from dataclasses import dataclass
from typing import Any, Mapping

from adaptix import Retort, name_mapping


@dataclass
class Book:
    title: str
    price: int
    extra: Mapping[str, Any]


data = {
    "title": "Fahrenheit 451",
    "price": 100,
    "unknown1": 1,
    "unknown2": 2,
}

retort = Retort(
    recipe=[
        name_mapping(Book, extra_in="extra"),
    ],
)

book = retort.load(data, Book)
assert book == Book(
    title="Fahrenheit 451",
    price=100,
    extra={
        "unknown1": 1,
        "unknown2": 2,
    },
)
assert retort.dump(book) == {
    "title": "Fahrenheit 451",
    "price": 100,
    "extra": {  # `extra` is treated as common field
        "unknown1": 1,
        "unknown2": 2,
    },
}

You can exclude the extra field from dumping. See Fields filtering for details.

Field id#

You can pass a string with a field name. The dumper of this field must return a mapping that will be merged with the dict of the dumped representation.

from dataclasses import dataclass
from typing import Any, Mapping

from adaptix import Retort, name_mapping


@dataclass
class Book:
    title: str
    price: int
    extra: Mapping[str, Any]


data = {
    "title": "Fahrenheit 451",
    "price": 100,
    "unknown1": 1,
    "unknown2": 2,
}

retort = Retort(
    recipe=[
        name_mapping(Book, extra_in="extra", extra_out="extra"),
    ],
)

book = retort.load(data, Book)
assert book == Book(
    title="Fahrenheit 451",
    price=100,
    extra={
        "unknown1": 1,
        "unknown2": 2,
    },
)
assert retort.dump(book) == data

Non-guaranteed behavior

Output mapping keys must not collide with keys of the dumped model; otherwise, the result is not guaranteed.

You can pass several field ids (Iterable[str]). The output mappings will be merged.

from dataclasses import dataclass
from typing import Any, Mapping

from adaptix import Retort, name_mapping


@dataclass
class Book:
    title: str
    price: int
    extra1: Mapping[str, Any]
    extra2: Mapping[str, Any]


retort = Retort(
    recipe=[
        name_mapping(Book, extra_out=["extra1", "extra2"]),
    ],
)

book = Book(
    title="Fahrenheit 451",
    price=100,
    extra1={
        "unknown1": 1,
        "unknown2": 2,
    },
    extra2={
        "unknown3": 3,
        "unknown4": 4,
    },
)
assert retort.dump(book) == {
    "title": "Fahrenheit 451",
    "price": 100,
    "unknown1": 1,
    "unknown2": 2,
    "unknown3": 3,
    "unknown4": 4,
}

Non-guaranteed behavior

The merge priority of the output mappings is not guaranteed.

Extractor function#

There is a way to take out extra data via a custom function called an ‘extractor’. The callable must take the model and produce a mapping of extra fields. The precise type hint is Callable[[T], Mapping[str, Any]].

import dataclasses
from dataclasses import dataclass
from typing import Any, Mapping

from adaptix import Retort, name_mapping


@dataclass
class Book:
    title: str
    price: int


data = {
    "title": "Fahrenheit 451",
    "price": 100,
    "unknown1": 1,
    "unknown2": 2,
}


def attr_saturator(model: Book, extra_data: Mapping[str, Any]) -> None:
    for key, value in extra_data.items():
        setattr(model, key, value)


book_fields = {fld.name for fld in dataclasses.fields(Book)}


def attr_extractor(model: Book) -> Mapping[str, Any]:
    return {
        key: value
        for key, value in vars(model).items()
        if key not in book_fields
    }


retort = Retort(
    recipe=[
        name_mapping(Book, extra_in=attr_saturator, extra_out=attr_extractor),
    ],
)

book = retort.load(data, Book)
assert retort.dump(book) == data

Non-guaranteed behavior

Output mapping keys must not collide with keys of the dumped model; otherwise, the result is not guaranteed.

Mapping to list#

Some APIs store structures as lists or arrays rather than dicts for optimization purposes. For example, Binance uses them to represent historical market data.

There is name_mapping.as_list that converts the model to a list. The position in the list is determined by the order of field definition.

from dataclasses import dataclass
from datetime import datetime, timezone

from adaptix import Retort, name_mapping


@dataclass
class Action:
    user_id: int
    kind: str
    timestamp: datetime


retort = Retort(
    recipe=[
        name_mapping(
            Action,
            as_list=True,
        ),
    ],
)


action = Action(
    user_id=23,
    kind="click",
    timestamp=datetime(2023, 5, 20, 15, 58, 23, 410366, tzinfo=timezone.utc),
)
data = [
    23,
    "click",
    "2023-05-20T15:58:23.410366+00:00",
]
assert retort.dump(action) == data
assert retort.load(data, Action) == action

You can override the order of fields using the name_mapping.map parameter.

from dataclasses import dataclass
from datetime import datetime, timezone

from adaptix import Retort, name_mapping


@dataclass
class Action:
    user_id: int
    kind: str
    timestamp: datetime


retort = Retort(
    recipe=[
        name_mapping(
            Action,
            map={
                "user_id": 1,
                "kind": 0,
            },
            as_list=True,
        ),
    ],
)


action = Action(
    user_id=23,
    kind="click",
    timestamp=datetime(2023, 5, 20, 15, 58, 23, 410366, tzinfo=timezone.utc),
)
data = [
    "click",
    23,
    "2023-05-20T15:58:23.410366+00:00",
]
assert retort.dump(action) == data
assert retort.load(data, Action) == action

Also, you can map the model to a list via name_mapping.map without using name_mapping.as_list, if you assign every field to its position in the list.

Mapping to list using only map
from dataclasses import dataclass
from datetime import datetime, timezone

from adaptix import Retort, name_mapping


@dataclass
class Action:
    user_id: int
    kind: str
    timestamp: datetime


retort = Retort(
    recipe=[
        name_mapping(
            Action,
            map={
                "user_id": 0,
                "kind": 1,
                "timestamp": 2,
            },
        ),
    ],
)


action = Action(
    user_id=23,
    kind="click",
    timestamp=datetime(2023, 5, 20, 15, 58, 23, 410366, tzinfo=timezone.utc),
)
data = [
    23,
    "click",
    "2023-05-20T15:58:23.410366+00:00",
]
assert retort.dump(action) == data
assert retort.load(data, Action) == action

Only ExtraSkip and ExtraForbid can be used with mapping to list.

Structure flattening#

An overly complex hierarchy of structures in an API can be flattened via the map parameter. Earlier, you used it to rename fields, but you can also use it to map a name to a nested value by specifying a path to it. Integers in the path are treated as list indices, strings as dict keys.

from dataclasses import dataclass

from adaptix import Retort, name_mapping


@dataclass
class Book:
    title: str
    price: int
    author: str


retort = Retort(
    recipe=[
        name_mapping(
            Book,
            map={
                "author": ["author", "name"],
                "title": ["book", "title"],
                "price": ["book", "price"],
            },
        ),
    ],
)

data = {
    "book": {
        "title": "Fahrenheit 451",
        "price": 100,
    },
    "author": {
        "name": "Ray Bradbury",
    },
}
book = retort.load(data, Book)
assert book == Book(
    title="Fahrenheit 451",
    price=100,
    author="Ray Bradbury",
)
assert retort.dump(book) == data

This snippet can be reduced.

  1. Ellipsis (...) inside a path is replaced by the original field name after automatic conversions.

  2. The dict can be replaced with a list of pairs. The first item of each pair is a predicate (see Predicate system for details), the second is the mapping result (a path in this case).

from dataclasses import dataclass

from adaptix import Retort, name_mapping


@dataclass
class Book:
    title: str
    price: int
    author: str


retort = Retort(
    recipe=[
        name_mapping(
            Book,
            map=[
                ("author", (..., "name")),
                ("title|price", ("book", ...)),
            ],
        ),
    ],
)

data = {
    "book": {
        "title": "Fahrenheit 451",
        "price": 100,
    },
    "author": {
        "name": "Ray Bradbury",
    },
}
book = retort.load(data, Book)
assert book == Book(
    title="Fahrenheit 451",
    price=100,
    author="Ray Bradbury",
)
assert retort.dump(book) == data

Chaining (partial overriding)#

The resulting name_mapping is computed by merging the parameters of all matched name_mapping providers.

from dataclasses import dataclass
from typing import Any, Dict

from adaptix import NameStyle, Retort, name_mapping


@dataclass
class Person:
    first_name: str
    last_name: str
    extra: Dict[str, Any]


@dataclass
class Book:
    title: str
    author: Person


retort = Retort(
    recipe=[
        name_mapping(Person, name_style=NameStyle.CAMEL),
        name_mapping("author", extra_in="extra", extra_out="extra"),
    ],
)

data = {
    "title": "Lord of Light",
    "author": {
        "firstName": "Roger",
        "lastName": "Zelazny",
        "unknown_field": 1995,
    },
}
book = retort.load(data, Book)
assert book == Book(
    title="Lord of Light",
    author=Person(
        first_name="Roger",
        last_name="Zelazny",
        extra={"unknown_field": 1995},
    ),
)
assert retort.dump(book) == data

The first provider overrides the parameters of subsequent providers.

from dataclasses import dataclass
from typing import Any, Dict

from adaptix import NameStyle, Retort, name_mapping


@dataclass
class Person:
    first_name: str
    last_name: str
    extra: Dict[str, Any]


@dataclass
class Book:
    title: str
    author: Person


retort = Retort(
    recipe=[
        name_mapping(Person, name_style=NameStyle.UPPER_SNAKE),
        name_mapping(Person, name_style=NameStyle.CAMEL),
        name_mapping("author", extra_in="extra", extra_out="extra"),
    ],
)

data = {
    "title": "Lord of Light",
    "author": {
        "FIRST_NAME": "Roger",
        "LAST_NAME": "Zelazny",
        "UNKNOWN_FIELD": 1995,
    },
}
book = retort.load(data, Book)
assert book == Book(
    title="Lord of Light",
    author=Person(
        first_name="Roger",
        last_name="Zelazny",
        extra={"UNKNOWN_FIELD": 1995},
    ),
)
assert retort.dump(book) == data

Private fields dumping#

By default, adaptix skips private fields (any field starting with an underscore) when dumping.

from pydantic import BaseModel

from adaptix import Retort


class Book(BaseModel):
    title: str
    price: int
    _private: int

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self._private = 1


retort = Retort()
book = Book(title="Fahrenheit 451", price=100)
assert retort.dump(book) == {
    "title": "Fahrenheit 451",
    "price": 100,
}

You can include these fields by setting an alias.

from pydantic import BaseModel

from adaptix import Retort, name_mapping


class Book(BaseModel):
    title: str
    price: int
    _private: int

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self._private = 1


retort = Retort(
    recipe=[
        name_mapping(Book, map={"_private": "private_field"}),
    ],
)
book = Book(title="Fahrenheit 451", price=100)
assert retort.dump(book) == {
    "title": "Fahrenheit 451",
    "price": 100,
    "private_field": 1,
}

An alias can be equal to the field name (field id); in that case, the field will still be included.

Including private field without renaming
from pydantic import BaseModel

from adaptix import Retort, name_mapping


class Book(BaseModel):
    title: str
    price: int
    _private: int

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self._private = 1


retort = Retort(
    recipe=[
        name_mapping(Book, map={"_private": "_private"}),
    ],
)
book = Book(title="Fahrenheit 451", price=100)
assert retort.dump(book) == {
    "title": "Fahrenheit 451",
    "price": 100,
    "_private": 1,
}

Advanced mapping#

Let’s go through all the features of name_mapping.map.

name_mapping.map can take data in two forms:

  1. A collections.abc.Mapping with field ids as keys and mapping results as values

  2. An iterable of pairs (tuples of two elements), providers, or mappings as described above. The provider interface for mapping is currently unstable and will not be described in this article. If you pass a tuple of two elements, the first item must be a predicate (see Predicate system for details), and the second item must be a mapping result or a function returning a mapping result.

If you use a mapping, all keys must be field ids (i.e. valid Python identifiers), so regexes like a|b are not allowed.

The mapping result is a union of 5 types:

  1. A string with the external field name

  2. An integer indicating the index inside the output sequence

  3. Ellipsis (...) that will be replaced with the key after builtin conversions by name_mapping.trim_trailing_underscore, name_mapping.name_style and name_mapping.as_list

  4. An iterable of strings, integers or ellipses, aka Structure flattening

  5. None, meaning the field is skipped. name_mapping.map is applied after name_mapping.only, so the field will be skipped even if it is matched by name_mapping.only.

Name mapping reuses the concept of a recipe inside the retort and also implements the chain-of-responsibility design pattern.

Only the first element matched by its predicate is used to determine the mapping result.

The callable producing the mapping result must take two parameters: the shape of the model and the field. The types of these parameters are currently internal. You can find their exact definitions in the source code, but they could change in the future.

Example of using advanced techniques:

import re
from dataclasses import dataclass
from typing import Iterable, List, Sequence

from adaptix import P, Retort, name_mapping


@dataclass
class Document:
    key: str

    redirects: List[str]
    edition_keys: List[str]
    lcc_list: List[str]


def create_plural_stripper(
    *,
    exclude: Sequence[str] = (),
    suffixes: Iterable[str] = ("s", "_list"),
):
    pattern = "^(.*)(" + "|".join(suffixes) + ")$"

    def plural_stripper(shape, fld):
        return re.sub(pattern, lambda m: m[1], fld.id)

    return (
        P[pattern] & ~P[tuple(exclude)],
        plural_stripper,
    )


retort = Retort(
    recipe=[
        name_mapping(
            Document,
            map=[
                {"key": "name"},
                create_plural_stripper(exclude=["redirects"]),
            ],
        ),
    ],
)
data = {
    "name": "The Lord of the Rings",
    "redirects": ["1234"],
    "edition_key": ["423", "4235"],
    "lcc": ["675", "345"],
}
document = retort.load(data, Document)
assert document == Document(
    key="The Lord of the Rings",
    redirects=["1234"],
    edition_keys=["423", "4235"],
    lcc_list=["675", "345"],
)
assert retort.dump(document) == data

Some XML APIs or APIs derived from XML do not use plural forms for repeated fields, so you need to strip the plural form in the external representation.

The first item of name_mapping.map is a dict that renames an individual field. The second item is a tuple created by the function. The function constructs an appropriate regex to match fields and trim plural suffixes.

The merging of map differs from other parameters: a new map does not replace the previous ones; the new iterable is concatenated to the previous one.

Specific types behavior#

Builtin loaders and dumpers are designed to work well with JSON data processing. If you are working with a different format, you may need to override the default behavior; see Retort recipe for details.

Most predefined loaders accept values of only a single type; if it is a string, it must be in a single format. You can disable the strict_coercion parameter of Retort to allow all conversions that the corresponding constructor can perform.
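
For instance, a small sketch of the difference using the builtin int loader:

from adaptix import Retort
from adaptix.load_error import LoadError

strict_retort = Retort()  # strict_coercion=True by default
lenient_retort = Retort(strict_coercion=False)

# the lenient loader defers to int(...), so a numeric string is accepted
assert lenient_retort.load("123", int) == 123

# the strict loader accepts only values of type int,
# so loading a string raises a LoadError subclass
try:
    strict_retort.load("123", int)
except LoadError:
    pass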

Scalar types#

Basic types#

Values of these types are loaded using their constructor. If strict_coercion is enabled, the loader will accept only values of the appropriate types listed in the Allowed strict origins column.

Type       Allowed strict origins   Dumping to
--------   ----------------------   -------------
int        int                      no conversion
float      float, int               no conversion
str        str                      no conversion
bool       bool                     no conversion
Decimal    str, Decimal             str
Fraction   str, Fraction            str
complex    str, complex             str

Any and object#

Value is passed as is, without any conversion.

None#

Loader accepts only None, dumper produces no conversion.

bytes-like#

Exact list: bytes, bytearray, ByteString.

Value is represented as base64 encoded string.
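
For instance, a minimal round-trip sketch:

from adaptix import Retort

retort = Retort()

assert retort.dump(b"Hello, world!", bytes) == "SGVsbG8sIHdvcmxkIQ=="
assert retort.load("SGVsbG8sIHdvcmxkIQ==", bytes) == b"Hello, world!"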

BytesIO and IO[bytes]#

Value is represented as base64 encoded string.

re.Pattern#

The loader accepts a string that will be compiled into a regex pattern. Dumper extracts the original string from a compiled pattern.
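
A small sketch (re.compile caches patterns, so the equality check below holds):

import re

from adaptix import Retort

retort = Retort()

pattern = retort.load(r"\d+", re.Pattern)
assert pattern == re.compile(r"\d+")
assert retort.dump(pattern, re.Pattern) == r"\d+"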

Path-like#

Exact list: PurePath, Path, PurePosixPath, PosixPath, PureWindowsPath, WindowsPath, PathLike[str].

Loader takes any string accepted by the constructor; dumper serializes the value via the __fspath__ method.

The PathLike[str] loader produces a Path instance.

IP addresses and networks#

Exact list: IPv4Address, IPv6Address, IPv4Network, IPv6Network, IPv4Interface, IPv6Interface.

Loader takes any string accepted by the constructor; dumper serializes the value via the __str__ method.

UUID#

Loader takes any hex string accepted by the constructor; dumper serializes the value via the __str__ method.

date, time and datetime#

Value is represented as an isoformat string.
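
For instance:

from datetime import date

from adaptix import Retort

retort = Retort()

assert retort.dump(date(1953, 10, 19)) == "1953-10-19"
assert retort.load("1953-10-19", date) == date(1953, 10, 19)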

timedelta#

Loader accepts an instance of int, float or Decimal representing seconds; dumper serializes the value via the total_seconds method.
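
A minimal sketch:

from datetime import timedelta

from adaptix import Retort

retort = Retort()

assert retort.load(90, timedelta) == timedelta(minutes=1, seconds=30)
assert retort.dump(timedelta(minutes=1, seconds=30)) == 90.0  # total_seconds()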

Flag subclasses#

Flag members are represented by their value by default. Note that flags with skipped bits and negative values are not supported, so it is highly recommended to define flag values via enum.auto() instead of specifying them manually. Besides, adaptix provides another way to process flags: by a list of their names. See flag_by_member_names for details.
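
A small sketch of the default behavior (Color is a hypothetical flag):

from enum import Flag, auto

from adaptix import Retort


class Color(Flag):
    RED = auto()    # 1
    GREEN = auto()  # 2
    BLUE = auto()   # 4


retort = Retort()

assert retort.dump(Color.RED | Color.BLUE, Color) == 5
assert retort.load(5, Color) == Color.RED | Color.BLUE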

Other Enum subclasses#

Enum members are represented by their value without any conversion.

LiteralString#

Loader and dumper have the same behavior as the builtin ones for the str type.

Compound types#

NewType#

All NewTypes are treated as their origin types.

For example, if you create MyNewModel = NewType('MyNewModel', MyModel), MyNewModel will share loader, dumper and name_mapping with MyModel. This also applies to user-defined providers.

You can override providers only for NewType if you pass MyNewModel directly as a predicate.
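
A small sketch of overriding a loader only for a NewType (UserId is a hypothetical example):

from typing import NewType

from adaptix import Retort, loader

UserId = NewType("UserId", int)

retort = Retort(
    recipe=[
        # the override applies only to UserId, not to plain int
        loader(UserId, int),
    ],
)

assert retort.load("125", UserId) == 125
assert retort.load(125, int) == 125  # int keeps its builtin strict loader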

Metadata types#

Types such as Final, Annotated, ClassVar and InitVar are processed the same as the wrapped types.

Literal#

Loader accepts only values listed in the Literal. If strict_coercion is enabled, the loader will distinguish equal bool and int instances; otherwise, they will be considered the same values. Enum instances will be loaded via their loaders. Enum loaders have a higher priority than the others, that is, they will be applied first.

If the input value can be interpreted as several Literal members, the result is undefined.

Dumper will return the value without any processing, excluding Enum instances, which will be processed via the corresponding dumper.

Be careful when you use 0, 1, False and True as Literal members. Due to type hint caching, Literal[0, 1] sometimes returns Literal[False, True]. This was fixed only in Python 3.9.1.

Union#

Loader calls loader of each union case and returns a value of the first loader that does not raise LoadError. Therefore, for the correct operation of a union loader, there must be no value that would be accepted by several union case loaders.

from dataclasses import dataclass
from typing import Union

from adaptix import Retort


@dataclass
class Cat:
    name: str
    breed: str


@dataclass
class Dog:
    name: str
    breed: str


retort = Retort()
retort.load({"name": "Tardar Sauce", "breed": "mixed"}, Union[Cat, Dog])

The return value in this example is undefined; it can be either a Cat instance or a Dog instance. This problem can be solved if the model contains a designator (tag) that uniquely determines the type.

from dataclasses import dataclass
from typing import Literal, Union

from adaptix import Retort


@dataclass
class Cat:
    name: str
    breed: str

    kind: Literal["cat"] = "cat"


@dataclass
class Dog:
    name: str
    breed: str

    kind: Literal["dog"] = "dog"


retort = Retort()
data = {"name": "Tardar Sauce", "breed": "mixed", "kind": "cat"}
cat = retort.load(data, Union[Cat, Dog])
assert cat == Cat(name="Tardar Sauce", breed="mixed")
assert retort.dump(cat) == data

This example shows how to add a type designator to the model. Be careful: this example does not work if name_mapping.omit_default is applied to the tag field.

Be careful if one model is a superset of another model. By default, all unknown fields are skipped, which makes it impossible to distinguish such models.

from dataclasses import dataclass
from typing import Union

from adaptix import Retort


@dataclass
class Vehicle:
    speed: float


@dataclass
class Bike(Vehicle):
    wheel_count: int


retort = Retort()
data = {"speed": 10, "wheel_count": 3}
assert retort.load(data, Bike) == Bike(speed=10, wheel_count=3)
assert retort.load(data, Vehicle) == Vehicle(speed=10)
retort.load(data, Union[Bike, Vehicle])  # result is undefined

This can be avoided by inserting a type designator as in the example above. Processing of unknown fields can be customized via name_mapping.extra_in.

The union dumper finds the appropriate dumper using the object's type. This means that it does not distinguish List[int] and List[str]. For objects of types that are not listed in the union but are a subclass of some union case, the base class dumper is used. If there are several parents, the class that appears first in the .mro() list is selected.

Also, the builtin dumper can work only with class type hints and Literal. For example, type hints like LiteralString | int cannot be dumped.

Iterable subclasses#

If strict_coercion is enabled, the loader takes any iterable excluding str and Mapping. If strict_coercion is disabled, any iterable is accepted.

Dumper produces the same iterable with dumped elements.

If you require a dumper or loader for an abstract type, a minimal suitable concrete type will be used, as the sketch below shows. For example, if you need a dumper for the type Iterable[int], the retort will use tuple. So if a field of type Iterable[int] contains a List[int], the list will be converted to a tuple during dumping.
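
A minimal sketch of this behavior:

from typing import Iterable

from adaptix import Retort

retort = Retort()

# tuple is the minimal suitable concrete type for Iterable
assert retort.load([1, 2, 3], Iterable[int]) == (1, 2, 3)
assert retort.dump([1, 2, 3], Iterable[int]) == (1, 2, 3)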

Tuples of dynamic length like *tuple[int, ...] aren't supported yet. This doesn't apply to tuples like *tuple[int, str] (constant-length tuples).

Dict and Mapping#

Loader accepts any Mapping and produces dict instances. Dumper also constructs a dict with converted keys and values.

DefaultDict#

Loader makes instances of defaultdict with the default_factory parameter set to None. To customize this behavior, there is the default_dict provider factory, whose default_dict.default_factory parameter can be overridden.
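
A sketch of customizing the factory, assuming default_dict is exported from the top-level adaptix package like the other provider factories and accepts a predicate followed by a default_factory keyword:

from collections import defaultdict
from typing import DefaultDict

from adaptix import Retort, default_dict

retort = Retort(
    recipe=[
        default_dict(DefaultDict[str, int], default_factory=int),
    ],
)

loaded = retort.load({"a": 1}, DefaultDict[str, int])
assert loaded == defaultdict(int, {"a": 1})
assert loaded["missing"] == 0  # default_factory is applied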

Models#

Models are classes that have a predefined set of fields. By default, models are loaded from a dict with keys equal to field names, but this behavior can be precisely configured via the name_mapping mechanism. Also, a model can be loaded from a list.

Dumper works similarly and produces dict (or list).

See Supported model kinds for the exact list of supported models.

Tutorial#

Installation#

Just use pip to install the library

pip install adaptix==3.0.0b5

Integrations with third-party libraries are turned on automatically, but you can install adaptix with extras to check that versions are compatible.

There are two variants of extras. The first one checks that the version is the same or newer than the last supported; the second (strict) additionally checks that the version is the same or older than the last tested version.

Extras              Versions bound
-----------------   ------------------------------
attrs               attrs >= 21.3.0
attrs-strict        attrs >= 21.3.0, <= 23.2.0
sqlalchemy          sqlalchemy >= 2.0.0
sqlalchemy-strict   sqlalchemy >= 2.0.0, <= 2.0.29
pydantic            pydantic >= 2.0.0
pydantic-strict     pydantic >= 2.0.0, <= 2.7.0

Extras are specified inside square brackets, separated by commas.

So, these are valid installation variants:

pip install adaptix[attrs-strict]==3.0.0b5
pip install adaptix[attrs, sqlalchemy-strict]==3.0.0b5

Introduction#

Building an easily maintainable application requires you to split the code into layers. Data between layers should be passed using special data structures. This requires creating many converter functions transforming one model into another.

Adaptix helps you avoid writing boilerplate code by generating conversion functions for you.

from dataclasses import dataclass

from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column

from adaptix.conversion import get_converter


class Base(DeclarativeBase):
    pass


class Book(Base):
    __tablename__ = "books"

    id: Mapped[int] = mapped_column(primary_key=True)
    title: Mapped[str]
    price: Mapped[int]


@dataclass
class BookDTO:
    id: int
    title: str
    price: int


convert_book_to_dto = get_converter(Book, BookDTO)

assert (
    convert_book_to_dto(Book(id=183, title="Fahrenheit 451", price=100))
    ==
    BookDTO(id=183, title="Fahrenheit 451", price=100)
)

The actual signature of convert_book_to_dto is automatically derived by any type checker and any IDE.

Adaptix can transform between any of the supported models; see Supported model kinds for the exact list of models and known limitations.

How does it work? Adaptix scans each field of the destination model and matches it with a field of the source model. By default, only fields with the same name are matched. You can override this behavior.

Also, it works for nested models.

from dataclasses import dataclass

from adaptix.conversion import get_converter


@dataclass
class Person:
    name: str


@dataclass
class Book:
    title: str
    price: int
    author: Person


@dataclass
class PersonDTO:
    name: str


@dataclass
class BookDTO:
    title: str
    price: int
    author: PersonDTO


convert_book_to_dto = get_converter(Book, BookDTO)

assert (
    convert_book_to_dto(
        Book(title="Fahrenheit 451", price=100, author=Person("Ray Bradbury")),
    )
    ==
    BookDTO(title="Fahrenheit 451", price=100, author=PersonDTO("Ray Bradbury"))
)

Furthermore, there is conversion.convert that can directly convert one model to another, but it is quite limited and cannot be configured, so it won't be considered further.

Usage of conversion.convert
from dataclasses import dataclass

from adaptix.conversion import convert


@dataclass
class Person:
    name: str


@dataclass
class Book:
    title: str
    price: int
    author: Person


@dataclass
class PersonDTO:
    name: str


@dataclass
class BookDTO:
    title: str
    price: int
    author: PersonDTO


assert (
    convert(
        Book(title="Fahrenheit 451", price=100, author=Person("Ray Bradbury")),
        BookDTO,
    )
    ==
    BookDTO(title="Fahrenheit 451", price=100, author=PersonDTO("Ray Bradbury"))
)

Upcasting#

All additional fields of the source model that are not found in the destination model are simply ignored.

from dataclasses import dataclass
from datetime import date

from adaptix.conversion import get_converter


@dataclass
class Book:
    title: str
    price: int
    author: str
    release_date: date
    page_count: int
    isbn: str


@dataclass
class BookDTO:
    title: str
    price: int
    author: str


convert_book_to_dto = get_converter(Book, BookDTO)

assert (
    convert_book_to_dto(
        Book(
            title="Fahrenheit 451",
            price=100,
            author="Ray Bradbury",
            release_date=date(1953, 10, 19),
            page_count=158,
            isbn="978-0-7432-4722-1",
        ),
    )
    ==
    BookDTO(
        title="Fahrenheit 451",
        price=100,
        author="Ray Bradbury",
    )
)

Downcasting#

Sometimes you need to supply extra data that the source model does not contain. For this, you can use a special decorator.

# mypy: disable-error-code="empty-body"
from dataclasses import dataclass

from adaptix.conversion import impl_converter


@dataclass
class Book:
    title: str
    price: int
    author: str


@dataclass
class BookDTO:
    title: str
    price: int
    author: str
    page_count: int


@impl_converter
def convert_book_to_dto(book: Book, page_count: int) -> BookDTO:
    ...


assert (
    convert_book_to_dto(
        book=Book(
            title="Fahrenheit 451",
            price=100,
            author="Ray Bradbury",
        ),
        page_count=158,
    )
    ==
    BookDTO(
        title="Fahrenheit 451",
        price=100,
        author="Ray Bradbury",
        page_count=158,
    )
)

conversion.impl_converter takes an empty function and generates its body based on the signature.

# mypy: disable-error-code="empty-body" at the top of the file is needed because mypy forbids functions without a body. You can also set this option in the mypy config or suppress each error individually via # type: ignore[empty-body].

Fields linking#

If the names of the fields are different, then you have to link them manually.

from dataclasses import dataclass

from adaptix import P
from adaptix.conversion import get_converter, link


@dataclass
class Book:
    name: str
    price: int
    author: str  # same as BookDTO.writer


@dataclass
class BookDTO:
    name: str
    price: int
    writer: str  # same as Book.author


convert_book_to_dto = get_converter(
    src=Book,
    dst=BookDTO,
    recipe=[link(P[Book].author, P[BookDTO].writer)],
)

assert (
    convert_book_to_dto(Book(name="Fahrenheit 451", price=100, author="Ray Bradbury"))
    ==
    BookDTO(name="Fahrenheit 451", price=100, writer="Ray Bradbury")
)

The first parameter of conversion.link is a predicate describing the field of the source model; the second parameter is a predicate pointing to the field of the destination model.

This notation means that the field author of class Book will be linked with the field writer of class BookDTO.

You can use simple strings instead of the P construct, but a string will match any field with the same name, regardless of the owner class.

By default, additional parameters can be linked only to fields of the top-level model. If you want to pass this data to a nested model, you should use the conversion.from_param predicate factory.

# mypy: disable-error-code="empty-body"
from dataclasses import dataclass

from adaptix import P
from adaptix.conversion import from_param, impl_converter, link


@dataclass
class Person:
    name: str


@dataclass
class Book:
    title: str
    author: Person


@dataclass
class PersonDTO:
    name: str
    rating: float


@dataclass
class BookDTO:
    title: str
    author: PersonDTO


@impl_converter(recipe=[link(from_param("author_rating"), P[PersonDTO].rating)])
def convert_book_to_dto(book: Book, author_rating: float) -> BookDTO:
    ...


assert (
    convert_book_to_dto(
        Book(title="Fahrenheit 451", author=Person("Ray Bradbury")),
        4.8,
    )
    ==
    BookDTO(title="Fahrenheit 451", author=PersonDTO("Ray Bradbury", 4.8))
)

If the field name differs from the parameter name, you can also use conversion.from_param to link them.

Linking algorithm#

The building of the converter is driven by the need to construct the destination model.

For each field of the destination model, adaptix searches for a corresponding field. Additional parameters are checked (from right to left) before the fields, so your custom linking also searches among the additional parameters.

By default, fields are matched by exact name equivalence; parameters are matched only for top-level destination model fields.

After fields are matched, adaptix tries to create a coercer that transforms data from the source field to the destination type.

Type coercion#

By default, there are no implicit coercions between scalar types.

However, there are cases where type casting simply passes the data as is, and adaptix detects them (a sketch follows the lists below):

  • source type and destination type are the same

  • destination type is Any

  • source type is a subclass of destination type (excluding generics)

  • source union is a subset of destination union (a simple == check is used)

Also, some compound types can be coerced if the corresponding inner types are coercible:

  • source and destination types are models (converted like top-level models)

  • source and destination types are Optional

  • source and destination types are one of the builtin iterables

  • source and destination types are dict
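
For instance, a minimal sketch covering the first two as-is cases (same type and destination Any):

from dataclasses import dataclass
from typing import Any

from adaptix.conversion import get_converter


@dataclass
class Book:
    title: str
    price: int


@dataclass
class BookDTO:
    title: str  # same type: the data is passed as is
    price: Any  # destination Any: the data is passed as is


convert_book_to_dto = get_converter(Book, BookDTO)

assert (
    convert_book_to_dto(Book(title="Fahrenheit 451", price=100))
    ==
    BookDTO(title="Fahrenheit 451", price=100)
)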

You can define your own coercion rule.

from dataclasses import dataclass
from uuid import UUID

from adaptix.conversion import coercer, get_converter


@dataclass
class Book:
    id: UUID
    title: str
    author: str


@dataclass
class BookDTO:
    id: str
    title: str
    author: str


convert_book_to_dto = get_converter(
    src=Book,
    dst=BookDTO,
    recipe=[coercer(UUID, str, func=str)],
)

assert (
    convert_book_to_dto(
        Book(
            id=UUID("87000388-94e6-49a4-b51b-320e38577bd9"),
            title="Fahrenheit 451",
            author="Ray Bradbury",
        ),
    )
    ==
    BookDTO(
        id="87000388-94e6-49a4-b51b-320e38577bd9",
        title="Fahrenheit 451",
        author="Ray Bradbury",
    )
)

The first parameter of conversion.coercer is a predicate describing the field of the source model, the second parameter is a predicate pointing to the field of the destination model, and the third parameter is the function that casts source data to the destination type.

Usually, only field types are used as predicates here.

Also, you can set a coercer for a specific link via the conversion.link.coercer parameter.

from dataclasses import dataclass
from decimal import Decimal

from adaptix import P
from adaptix.conversion import get_converter, link


@dataclass
class Book:
    name: str
    price: int  # same as BookDTO.cost
    author: str


@dataclass
class BookDTO:
    name: str
    cost: Decimal  # same as Book.price
    author: str


convert_book_to_dto = get_converter(
    src=Book,
    dst=BookDTO,
    recipe=[link(P[Book].price, P[BookDTO].cost, coercer=lambda x: Decimal(x) / 100)],
)

assert (
    convert_book_to_dto(Book(name="Fahrenheit 451", price=100, author="Ray Bradbury"))
    ==
    BookDTO(name="Fahrenheit 451", cost=Decimal("1"), author="Ray Bradbury")
)

This coercer has a higher priority than those defined via the conversion.coercer function.

Putting together#

Let’s explore a complex example that brings all the features together.

# mypy: disable-error-code="empty-body"
from dataclasses import dataclass
from datetime import date
from uuid import UUID

from adaptix import P
from adaptix.conversion import coercer, from_param, impl_converter, link


@dataclass
class Author:
    name: str
    surname: str
    birthday: date  # is converted to str


@dataclass
class Book:
    id: UUID  # is converted to str
    title: str
    author: Author  # is renamed to `writer`
    isbn: str  # this field is ignored


@dataclass
class AuthorDTO:
    name: str
    surname: str
    birthday: str


@dataclass
class BookDTO:
    id: str
    title: str
    writer: AuthorDTO
    page_count: int  # is taken from `pages_len` param
    rating: float  # is taken from param with the same name


@impl_converter(
    recipe=[
        link(from_param("pages_len"), P[BookDTO].page_count),
        link(P[Book].author, P[BookDTO].writer),
        coercer(UUID, str, func=str),
        coercer(P[Author].birthday, P[AuthorDTO].birthday, date.isoformat),
    ],
)
def convert_book_to_dto(book: Book, pages_len: int, rating: float) -> BookDTO:
    ...


assert (
    convert_book_to_dto(
        book=Book(
            id=UUID("87000388-94e6-49a4-b51b-320e38577bd9"),
            isbn="978-0-7432-4722-1",
            title="Fahrenheit 451",
            author=Author(name="Ray", surname="Bradbury", birthday=date(1920, 7, 22)),
        ),
        pages_len=158,
        rating=4.8,
    )
    ==
    BookDTO(
        id="87000388-94e6-49a4-b51b-320e38577bd9",
        title="Fahrenheit 451",
        writer=AuthorDTO(name="Ray", surname="Bradbury", birthday="1920-07-22"),
        page_count=158,
        rating=4.8,
    )
)

Integrations#

This article describes how adaptix works with other packages and systems.

Supported model kinds#

Models are classes that have a predefined set of fields. Adaptix processes models in a single, consistent way.

Models that are supported out of the box: @dataclass, TypedDict, NamedTuple, attrs, sqlalchemy and pydantic.

Arbitrary types are also supported: they can be loaded via introspection of the __init__ method, but they cannot be dumped.

You do not need to do anything to enable support for models from a third-party library. Everything just works. But you can install adaptix with certain extras to ensure version compatibility.

Due to the way Python works with annotations, there is a bug: when a field annotation of a TypedDict is stringified, or from __future__ import annotations is placed in the file, the Required and NotRequired specifiers are ignored when required_keys and optional_keys are calculated. Adaptix takes this into account and processes it properly.

Known peculiarities and limitations#

dataclass#
  • The signature of a custom __init__ method must be the same as the signature generated by @dataclass, because there is no way to distinguish them.

__init__ introspection or using constructor#
  • Fields of an unpacked typed dict (**kwargs: Unpack[YourTypedDict]) cannot collide with the parameters of the function.

sqlalchemy#
  • Only mapping to Table is supported, implementations for FromClause instances such as Subquery and Join are not provided.

  • dataclass and attrs mapped by sqlalchemy are not supported for introspection.

  • By design, it does not preserve the order of mapped fields, so you should use manual mapping to list instead of the automatic as_list=True.

  • Relationships with custom collection_class are not supported.

  • All input fields of foreign keys and relationships are considered optional, because the user can pass only a relationship instance or only a foreign key value.

pydantic#
  • Custom __init__ function must have only one parameter accepting arbitrary keyword arguments (like **kwargs or **data).

  • There are 3 categories of fields: regular fields, computed fields (marked properties) and private attributes. Pydantic tracks order inside one category, but does not track it between categories. Also, pydantic does not keep the right order inside private attributes.

    Therefore, during the dumping of fields, regular fields will come first, followed by computed fields, and then private attributes. You can use manual mapping to list instead of the automatic as_list=True to control the order.

  • Fields with constraints defined by parameters (like f1: int = Field(gt=1, ge=10)) are translated to Annotated with corresponding metadata. Metadata is generated by Pydantic and consists of objects from annotated_types package (like Annotated[int, Gt(gt=1), Ge(ge=10)]).

  • Parametrized generic pydantic models do not expose the common type hint dunders, which prevents proper type hint introspection. This leads to incorrect generic resolution in some tricky cases.

    Also, there are some bugs in generic resolution inside pydantic itself.

  • Pydantic does not support variadic generics.

  • pydantic.dataclasses is not supported.

  • pydantic.v1 is not supported.

Working with Pydantic#

By default, any pydantic model is loaded and dumped like any other model. For example, any aliases or config parameters defined inside the model are ignored. You can override this behavior to use a native pydantic validation/serialization mechanism.

from pydantic import BaseModel, Field

from adaptix import Retort
from adaptix.integrations.pydantic import native_pydantic


class Book(BaseModel):
    title: str = Field(alias="name")
    price: int


data = {
    "name": "Fahrenheit 451",
    "price": 100,
}

retort = Retort(
    recipe=[
        native_pydantic(Book),
    ],
)

book = retort.load(data, Book)
assert book == Book(name="Fahrenheit 451", price=100)
assert retort.dump(book) == data

Examples#

The source code repository contains various examples of library usage. The behavior of each example is illustrated via included tests.

Simple API processing#

Example of loading and dumping data for some JSON API. It shows how to achieve the desired result using minimal retort configuration.

Models represent simplified data of current and forecast weather requests to OpenWeather.

Source Code

SQLAlchemy JSON#

This example shows how to use Adaptix with SQLAlchemy to store JSON in a relational database.

Adaptix transparently converts JSON to the desired dataclass and vice versa, so your SQLAlchemy models contain already-transmuted data.

Be careful persisting JSON in relational databases. There are only a few appropriate use cases for this.

Source Code

API division#

This example illustrates how to implement different representations of a single model. The first representation is outer (outer_receipt_retort); it has a lot of validations and is used to load data from untrusted sources, e.g. API users. The second is inner (inner_receipt_retort), which contains less validation, speeding up data loading. It can be used to load and dump data for an internal API to communicate between services.

Also, this example shows some other advanced concepts like adding support for custom types (PhoneNumber and Money) and provider chaining.

Another important concept behind this example is that there are no general retort objects. You can define a retort configured to work with a specific type and then include this retort to another responsible for the entire API endpoints.

For simplicity, inner_receipt_retort and outer_receipt_retort are contained in one module, but in production code, most likely, they should be placed in their own Interface Adapters layer.

Source Code

adaptix#

adaptix package#

Subpackages#

adaptix.conversion package#
Module contents#
adaptix.conversion.convert(
src_obj: Any,
dst: Type[DstT],
*,
recipe: Iterable[Provider] = (),
) DstT#

Function transforming a source object to destination.

Parameters:
  • src_obj – A source object to convert.

  • dst – A type of converter output data.

  • recipe – An extra recipe added to the retort.

Returns:

Instance of destination

adaptix.conversion.get_converter(
src: Any,
dst: Any,
*,
recipe: Iterable[Provider] = (),
name: str | None = None,
)#

Factory producing basic converter.

Parameters:
  • src – A type of converter input data.

  • dst – A type of converter output data.

  • recipe – An extra recipe added to the retort.

  • name – Name of generated function, if value is None, name will be derived.

Returns:

Desired converter function

adaptix.conversion.impl_converter(
stub_function: Callable | None = None,
*,
recipe: Iterable[Provider] = (),
)#

Decorator producing converter with signature of stub function.

Parameters:
  • stub_function – A function whose signature is used to generate the converter.

  • recipe – An extra recipe added to the retort.

Returns:

Desired converter function

adaptix.conversion.link(
src: str | Pattern | type | Any | LocStackChecker | LocStackPattern,
dst: str | Pattern | type | Any | LocStackChecker | LocStackPattern,
*,
coercer: Callable[[Any], Any] | None = None,
) Provider#

Basic provider to define custom linking between fields.

Parameters:
  • src – Predicate specifying source point of linking. See Predicate system for details.

  • dst – Predicate specifying destination point of linking. See Predicate system for details.

  • coercer – Function transforming source value to target. It has higher priority than generic coercers defined by coercer.

Returns:

Desired provider

adaptix.conversion.link_constant(
dst: str | Pattern | type | Any | LocStackChecker | LocStackPattern,
*,
value: Any = ...,
factory: Callable[[], Any] = ...,
) Provider#

Provider that passes a constant value or the result of a function call to a field.

Parameters:
  • dst – Predicate specifying destination point of linking. See Predicate system for details.

  • value – A value passed to the field.

  • factory – A callable producing the value passed to the field.

Returns:

Desired provider

adaptix.conversion.coercer(
src: str | Pattern | type | Any | LocStackChecker | LocStackPattern,
dst: str | Pattern | type | Any | LocStackChecker | LocStackPattern,
func: Callable[[Any], Any],
) Provider#

Basic provider to define custom coercer.

Parameters:
  • src – Predicate specifying source point of linking. See Predicate system for details.

  • dst – Predicate specifying destination point of linking. See Predicate system for details.

  • func – The function used to transform input data to the destination type.

Returns:

Desired provider

adaptix.conversion.allow_unlinked_optional(
*preds: str | Pattern | type | Any | LocStackChecker | LocStackPattern,
) Provider#

Sets the policy to permit optional fields that are not linked to any source field.

Parameters:

preds – Predicate specifying target of policy. Each predicate is merged via | operator. See Predicate system for details.

Returns:

Desired provider.

adaptix.conversion.forbid_unlinked_optional(
*preds: str | Pattern | type | Any | LocStackChecker | LocStackPattern,
) Provider#

Sets the policy to prohibit optional fields that are not linked to any source field.

Parameters:

preds – Predicate specifying target of policy. Each predicate is merged via | operator. See Predicate system for details.

Returns:

Desired provider.

adaptix.conversion.from_param(
param_name: str,
) LocStackChecker#

The special predicate form matching only top-level parameters by name

class adaptix.conversion.AdornedConversionRetort(
recipe: Iterable[Provider] = (),
)#

Bases: OperatingRetort

extend(
*,
recipe: Iterable[Provider],
) AR#
get_converter(
src: Type[SrcT],
dst: Type[DstT],
*,
recipe: Iterable[Provider] = (),
) Callable[[SrcT], DstT]#
get_converter(
src: Any,
dst: Any,
*,
name: str | None = None,
recipe: Iterable[Provider] = (),
) Callable[[Any], Any]

Method producing basic converter.

Parameters:
  • src – A type of converter input data.

  • dst – A type of converter output data.

  • recipe – An extra recipe added to the retort.

  • name – Name of generated function, if value is None, name will be derived.

Returns:

Desired converter function

recipe: ClassVar[Iterable[Provider]]#
impl_converter(
func_stub: CallableT,
/,
) CallableT#
impl_converter(
*,
recipe: Iterable[Provider] = (),
) Callable[[CallableT], CallableT]

Decorator producing converter with signature of stub function.

Parameters:
  • stub_function – A function whose signature is used to generate the converter.

  • recipe – An extra recipe added to the retort.

Returns:

Desired converter function

convert(
src_obj: Any,
dst: Type[DstT],
*,
recipe: Iterable[Provider] = (),
) DstT#

Method transforming a source object to destination.

Parameters:
  • src_obj – A source object to convert.

  • dst – A type of converter output data.

  • recipe – An extra recipe added to the retort.

Returns:

Instance of destination

class adaptix.conversion.FilledConversionRetort(
recipe: Iterable[Provider] = (),
)#

Bases: OperatingRetort

recipe: ClassVar[Iterable[Provider]]#
class adaptix.conversion.ConversionRetort(
recipe: Iterable[Provider] = (),
)#

Bases: FilledConversionRetort, AdornedConversionRetort

recipe: ClassVar[Iterable[Provider]]#
adaptix.integrations package#
Subpackages#
adaptix.integrations.pydantic package#
Module contents#
adaptix.integrations.pydantic.native_pydantic(
*preds: str | Pattern | type | Any | LocStackChecker | LocStackPattern,
strict: bool | None | Omitted = Omitted(),
from_attributes: bool | None | Omitted = Omitted(),
mode: Literal['json', 'python'] | str | Omitted = Omitted(),
include: IncEx | Omitted = Omitted(),
exclude: IncEx | Omitted = Omitted(),
by_alias: bool | Omitted = Omitted(),
exclude_unset: bool | Omitted = Omitted(),
exclude_defaults: bool | Omitted = Omitted(),
exclude_none: bool | Omitted = Omitted(),
round_trip: bool | Omitted = Omitted(),
warnings: bool | Literal['none', 'warn', 'error'] | Omitted = Omitted(),
fallback: Callable[[Any], Any] | Omitted = Omitted(),
serialize_as_any: bool | Omitted = Omitted(),
context: Dict[str, Any] | None | Omitted = Omitted(),
config: ConfigDict | None = None,
) Provider#

Provider that represents value via pydantic. You can use this function to validate or serialize pydantic models via pydantic itself. Provider constructs TypeAdapter for a type to load and dump data.

Parameters:
  • preds – Predicates specifying where the provider should be used. The provider will be applied if any predicate meets the condition; if no predicates are passed, the provider will be used for all pydantic models. See Predicate system for details.

  • strict – Parameter passed directly to .validate_python() method

  • from_attributes – Parameter passed directly to .validate_python() method

  • mode – Parameter passed directly to .to_python() method

  • include – Parameter passed directly to .to_python() method

  • exclude – Parameter passed directly to .to_python() method

  • by_alias – Parameter passed directly to .to_python() method

  • exclude_unset – Parameter passed directly to .to_python() method

  • exclude_defaults – Parameter passed directly to .to_python() method

  • exclude_none – Parameter passed directly to .to_python() method

  • round_trip – Parameter passed directly to .to_python() method

  • warnings – Parameter passed directly to .to_python() method

  • fallback – Parameter passed directly to .to_python() method

  • serialize_as_any – Parameter passed directly to .to_python() method

  • context – Parameter passed directly to .validate_python() and .to_python() methods

  • config – Parameter passed directly to config parameter of TypeAdapter constructor

Returns:

Desired provider

Module contents#
adaptix.provider package#
Module contents#
exception adaptix.provider.CannotProvide(
message: str = '',
*,
is_terminal: bool = False,
is_demonstrative: bool = False,
)#

Bases: Exception

exception adaptix.provider.AggregateCannotProvide(
message: str,
exceptions: Sequence[CannotProvide],
*,
is_terminal: bool = False,
is_demonstrative: bool = False,
)#

Bases: ExceptionGroup[CannotProvide], CannotProvide

derive(
excs: Sequence[CannotProvide],
) AggregateCannotProvide#
derive_upcasting(
excs: Sequence[CannotProvide],
) CannotProvide#

Same as method derive but allows passing an empty sequence

classmethod make(
message: str,
exceptions: Sequence[CannotProvide],
*,
is_terminal: bool = False,
is_demonstrative: bool = False,
) CannotProvide#
class adaptix.provider.Mediator#

Bases: ABC, Generic[V]

Mediator is an object that gives provider access to other providers and that stores the state of the current search.

Mediator is a proxy to providers of retort.

abstract provide(
request: Request[T],
) T#

Get response of sent request.

Parameters:

request – A request instance

Returns:

Result of the request processing

Raises:

CannotProvide – A provider able to process the request cannot be found

abstract provide_from_next() V#

Forward the current request to providers placed after the current provider in the recipe.

final delegating_provide(
request: Request[T],
error_describer: Callable[[CannotProvide], str] | None = None,
) T#
final mandatory_provide(
request: Request[T],
error_describer: Callable[[CannotProvide], str] | None = None,
) T#
final mandatory_provide_by_iterable(
requests: Iterable[Request[T]],
error_describer: Callable[[], str] | None = None,
) Iterable[T]#
class adaptix.provider.Provider#

Bases: ABC

An object that can process Request instances

abstract apply_provider(
mediator: Mediator[T],
request: Request[T],
) T#

Handle a request instance and return a value of the type required by the request. Behavior must be the same during the provider object's lifetime

Raises:

CannotProvide – provider cannot process passed request

class adaptix.provider.Request#

Bases: Generic[T]

An object that contains data to be processed by Provider.

Generic argument indicates which object should be returned after request processing.

Request must always be a hashable object

class adaptix.provider.Chain(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)#

Bases: Enum

FIRST = 'FIRST'#
LAST = 'LAST'#
class adaptix.provider.LocStackPattern(
stack: Tuple[LocStackChecker, ...],
)#

Bases: object

property ANY: AnyLocStackChecker#
generic_arg(
pos: int,
pred: str | Pattern | type | Any | LocStackChecker | LocStackPattern,
) Pat#
build_loc_stack_checker() LocStackChecker#
adaptix.provider.create_loc_stack_checker(
pred: str | Pattern | type | Any | LocStackChecker | LocStackPattern,
) LocStackChecker#

Submodules#

adaptix.load_error module#
exception adaptix.load_error.LoadError#

Bases: Exception

The base class for the exceptions that are raised when the loader gets invalid input data

exception adaptix.load_error.LoadExceptionGroup(
message: str,
exceptions: Tuple[LoadError, ...],
)#

Bases: ExceptionGroup[LoadError], LoadError

The base class integrating ExceptionGroup into the LoadError hierarchy

exception adaptix.load_error.AggregateLoadError(
message: str,
exceptions: Tuple[LoadError, ...],
)#

Bases: LoadExceptionGroup

The class collecting distinct load errors

exception adaptix.load_error.UnionLoadError(message: str, exceptions: Tuple[adaptix.load_error.LoadError, ...])#

Bases: LoadExceptionGroup

exception adaptix.load_error.MsgLoadError(msg: str | None, input_value: Any)#

Bases: LoadError

msg: str | None#
input_value: Any#
exception adaptix.load_error.ExtraFieldsLoadError(fields: Iterable[str], input_value: Any)#

Bases: LoadError

fields: Iterable[str]#
input_value: Any#
exception adaptix.load_error.ExtraItemsLoadError(expected_len: int, input_value: Any)#

Bases: LoadError

expected_len: int#
input_value: Any#
exception adaptix.load_error.NoRequiredFieldsLoadError(fields: Iterable[str], input_value: Any)#

Bases: LoadError

fields: Iterable[str]#
input_value: Any#
exception adaptix.load_error.NoRequiredItemsLoadError(expected_len: int, input_value: Any)#

Bases: LoadError

expected_len: int#
input_value: Any#
exception adaptix.load_error.TypeLoadError(expected_type: Any, input_value: Any)#

Bases: LoadError

expected_type: Any#
input_value: Any#
exception adaptix.load_error.ExcludedTypeLoadError(expected_type: Any, input_value: Any, excluded_type: Any)#

Bases: TypeLoadError

expected_type: Any#
excluded_type: Any#
input_value: Any#
exception adaptix.load_error.ValueLoadError(msg: str | None, input_value: Any)#

Bases: MsgLoadError

exception adaptix.load_error.ValidationLoadError(msg: str | None, input_value: Any)#

Bases: MsgLoadError

exception adaptix.load_error.BadVariantLoadError(allowed_values: Iterable[Any], input_value: Any)#

Bases: LoadError

allowed_values: Iterable[Any]#
input_value: Any#
exception adaptix.load_error.FormatMismatchLoadError(format: str, input_value: Any)#

Bases: LoadError

format: str#
input_value: Any#
exception adaptix.load_error.DuplicatedValuesLoadError(input_value: Any)#

Bases: LoadError

input_value: Any#
exception adaptix.load_error.OutOfRangeLoadError(
min_value: int | float | NoneType,
max_value: int | float | NoneType,
input_value: Any,
)#

Bases: LoadError

min_value: int | float | None#
max_value: int | float | None#
input_value: Any#
exception adaptix.load_error.MultipleBadVariantLoadError(
allowed_values: Iterable[Any],
invalid_values: Iterable[Any],
input_value: Any,
)#

Bases: LoadError

allowed_values: Iterable[Any]#
invalid_values: Iterable[Any]#
input_value: Any#
adaptix.retort module#
class adaptix.retort.BaseRetort(recipe: Iterable[Provider] = ())#

Bases: Cloneable, ABC

recipe: ClassVar[Iterable[Provider]]#
exception adaptix.retort.NoSuitableProvider(message: str)#

Bases: Exception

class adaptix.retort.OperatingRetort(
recipe: Iterable[Provider] = (),
)#

Bases: BaseRetort, Provider, ABC

A retort that can operate like Retort but has no predefined providers and no high-level user interface

apply_provider(
mediator: Mediator,
request: Request[T],
) T#

Handle a request instance and return a value of the type required by the request. Behavior must be the same during the provider object's lifetime

Raises:

CannotProvide – provider cannot process passed request

recipe: ClassVar[Iterable[Provider]]#
adaptix.struct_trail module#
class adaptix.struct_trail.TrailElementMarker#

Bases: object

class adaptix.struct_trail.Attr(name: str)#

Bases: TrailElementMarker

name: str#
class adaptix.struct_trail.ItemKey(key: Any)#

Bases: TrailElementMarker

key: Any#
adaptix.struct_trail.append_trail(
obj: T,
trail_element: str | int | Any | TrailElementMarker,
) T#

Append a trail element to an object. The trail is stored in a special attribute; if the object does not allow adding third-party attributes, nothing happens. The element is inserted at the start of the trail (it is built in reverse order)

adaptix.struct_trail.extend_trail(
obj: T,
sub_trail: Reversible[str | int | Any | TrailElementMarker],
) T#

Extend a trail with a sub-trail. The trail is stored in a special attribute; if the object does not allow adding third-party attributes, nothing happens. The sub-trail is inserted at the start (the trail is built in reverse order)

adaptix.struct_trail.get_trail(
obj: object,
) Sequence[str | int | Any | TrailElementMarker]#

Retrieve the trail from an object. The trail is stored in a special private attribute that should never be accessed directly
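A minimal sketch of how these helpers fit together (reading the trail through get_trail rather than the private attribute):

from adaptix.struct_trail import append_trail, get_trail

exc = ValueError("invalid value")
# the innermost element is appended first, so the trail reads outside-in
append_trail(exc, "price")
append_trail(exc, "book")
assert list(get_trail(exc)) == ["book", "price"]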

adaptix.struct_trail.render_trail_as_note(
exc: BaseExcT,
) BaseExcT#

Module contents#

adaptix.TypeHint#

alias of Any

class adaptix.DebugTrail(
value,
names=None,
*,
module=None,
qualname=None,
type=None,
start=1,
boundary=None,
)#

Bases: Enum

DISABLE = 'DISABLE'#
FIRST = 'FIRST'#
ALL = 'ALL'#
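A minimal usage sketch (as the value names suggest, DISABLE turns trail collection off, FIRST keeps only the first error, ALL keeps every error):

from adaptix import DebugTrail, Retort

# trusted input: skip error trail bookkeeping
fast_retort = Retort(debug_trail=DebugTrail.DISABLE)

# stop at the first error instead of collecting all of them
first_error_retort = Retort(debug_trail=DebugTrail.FIRST)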
adaptix.loader(
pred: str | Pattern | type | Any | LocStackChecker | LocStackPattern,
func: adaptix.Loader,
chain: Chain | None = None,
) Provider#

Basic provider to define custom loader.

Parameters:
  • pred – Predicate specifying where loader should be used. See Predicate system for details.

  • func – Function that acts as loader. It must take one positional argument of raw data and return the processed value.

  • chain

    Controls how the function will interact with the previous loader.

    When None is passed, the specified function will fully replace the previous loader.

    If the parameter is Chain.FIRST, the specified function takes the raw data, and its result is passed to the previous loader.

    If the parameter is Chain.LAST, the specified function gets the result of the previous loader.

Returns:

Desired provider
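A minimal sketch of chaining (assuming a plain string predicate matches fields by name, per the predicate system; the Event model is hypothetical):

from dataclasses import dataclass

from adaptix import Chain, Retort, loader


@dataclass
class Event:
    name: str


retort = Retort(
    recipe=[
        # pre-process the raw value, then hand the result to the default str loader
        loader("name", str.strip, Chain.FIRST),
    ],
)
assert retort.load({"name": "  PyCon  "}, Event) == Event(name="PyCon")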

adaptix.dumper(
pred: str | Pattern | type | Any | LocStackChecker | LocStackPattern,
func: adaptix.Dumper,
chain: Chain | None = None,
) Provider#

Basic provider to define custom dumper.

Parameters:
  • pred – Predicate specifying where dumper should be used. See Predicate system for details.

  • func – Function that acts as dumper. It must take one positional argument of raw data and return the processed value.

  • chain

    Controls how the function will interact with the previous dumper.

    When None is passed, the specified function will fully replace the previous dumper.

    If the parameter is Chain.FIRST, the specified function takes the raw data, and its result is passed to the previous dumper.

    If the parameter is Chain.LAST, the specified function gets the result of the previous dumper.

Returns:

Desired provider
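For example, a sketch that swaps the default datetime representation for unix timestamps (the default chain=None fully replaces the previous dumper):

from datetime import datetime, timezone

from adaptix import Retort, dumper

retort = Retort(
    recipe=[
        # fully replace the built-in datetime dumper
        dumper(datetime, lambda dt: dt.timestamp()),
    ],
)
assert retort.dump(datetime(2024, 4, 20, tzinfo=timezone.utc)) == 1713571200.0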

adaptix.as_is_dumper(
pred: str | Pattern | type | Any | LocStackChecker | LocStackPattern,
) Provider#

Provider that creates a dumper that does nothing with the input data.

Parameters:

pred – Predicate specifying where dumper should be used. See Predicate system for details.

Returns:

Desired provider

adaptix.as_is_loader(
pred: str | Pattern | type | Any | LocStackChecker | LocStackPattern,
) Provider#

Provider that creates a loader that does nothing with the input data.

Parameters:

pred – Predicate specifying where loader should be used. See Predicate system for details.

Returns:

Desired provider

adaptix.constructor(
pred: str | Pattern | type | Any | LocStackChecker | LocStackPattern,
func: Callable,
) Provider#
adaptix.with_property(
pred: Pred,
prop: NameOrProp,
tp: Omittable[TypeHint] = Omitted(),
/,
*,
default: Default = NoDefault(),
access_error: Catchable | None = None,
metadata: Mapping[Any, Any] = mappingproxy({}),
) Provider#
adaptix.validator(
pred: str | Pattern | type | Any | LocStackChecker | LocStackPattern,
func: Callable[[Any], bool],
error: str | Callable[[Any], LoadError] | None = None,
chain: Chain = Chain.LAST,
) Provider#
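The signature suggests usage like the following sketch (assuming a failing check raises ValidationLoadError with the given message, running after the built-in loader by default; the Order model is hypothetical):

from dataclasses import dataclass

from adaptix import Retort, validator


@dataclass
class Order:
    quantity: int


retort = Retort(
    recipe=[
        validator("quantity", lambda x: x > 0, "must be positive"),
    ],
)
retort.load({"quantity": -1}, Order)  # raises a load error with "must be positive"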
adaptix.bound(
pred: str | Pattern | type | Any | LocStackChecker | LocStackPattern,
provider: Provider,
) Provider#
adaptix.enum_by_exact_value(
*preds: Any | str | EnumType | LocStackPattern,
) Provider#

Provider that represents enum members to the outside world by their value without any processing.

Parameters:

preds – Predicates specifying where the provider should be used. The provider will be applied if any predicate meets the conditions; if no predicates are passed, the provider will be used for all Enums. See Predicate system for details.

Returns:

Desired provider

adaptix.enum_by_name(
*preds: Any | str | EnumType | LocStackPattern,
name_style: NameStyle | None = None,
map: Mapping[str | Enum, str] | None = None,
) Provider#

Provider that represents enum members to the outside world by their name.

Parameters:
  • preds – Predicates specifying where the provider should be used. The provider will be applied if any predicate meets the conditions; if no predicates are passed, the provider will be used for all Enums. See Predicate system for details.

  • name_style – Name style for representing members to the outside world. If it is set, the provider will automatically convert the names of enum members to the specified convention.

  • map – Mapping for representing members to the outside world. If it is set, the provider will use it to rename members individually; its keys can either be member names as strings or member instances.

Returns:

Desired provider
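A minimal sketch (the Direction enum is hypothetical):

from enum import Enum

from adaptix import NameStyle, Retort, enum_by_name


class Direction(Enum):
    NORTH_WEST = 1
    SOUTH_EAST = 2


retort = Retort(recipe=[enum_by_name(Direction, name_style=NameStyle.CAMEL)])
assert retort.load("northWest", Direction) == Direction.NORTH_WEST
assert retort.dump(Direction.SOUTH_EAST) == "southEast"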

adaptix.enum_by_value(
first_pred: Any | str | EnumType | LocStackPattern,
/,
*preds: Any | str | EnumType | LocStackPattern,
tp: Any,
) Provider#

Provider that represents enum members to the outside world by their value, using the loader and dumper of the specified type. The loader will call the loader of tp and pass the result to the enum constructor. The dumper will get the value from the enum member and pass it to the dumper of tp.

Parameters:
  • first_pred – Predicate specifying where the provider should be used. See Predicate system for details.

  • preds – Additional predicates. The provider will be applied if any predicates meet the conditions.

  • tp – Type of enum members. This type must cover all enum members for the correct operation of loader and dumper

Returns:

Desired provider

adaptix.flag_by_exact_value(
*preds: Any | str | EnumType | LocStackPattern,
) Provider#

Provider that represents flag members to the outside world by their value without any processing. It does not support flags with skipped bits and negative values (it is recommended to use enum.auto() to define flag values instead of manually specifying them).

Parameters:

preds – Predicates specifying where the provider should be used. The provider will be applied if any predicate meets the conditions; if no predicates are passed, the provider will be used for all Flags. See Predicate system for details.

Returns:

Desired provider

adaptix.flag_by_member_names(
*preds: Any | str | EnumType | LocStackPattern,
allow_single_value: bool = False,
allow_duplicates: bool = True,
allow_compound: bool = True,
name_style: NameStyle | None = None,
map: Mapping[str | Enum, str] | None = None,
) Provider#

Provider that represents flag members to the outside world by list of their names.

The loader takes a list of flag member names and returns the united flag member (the given members combined by the | operator, i.e. bitwise OR).

The dumper takes a flag member and returns a list of names of the flag members included in it.

Parameters:
  • preds – Predicates specifying where the provider should be used. The provider will be applied if any predicate meets the conditions; if no predicates are passed, the provider will be used for all Flags. See Predicate system for details.

  • allow_single_value – Allows calling the loader with a single value. If this is allowed, singular values are treated as a one-element list.

  • allow_duplicates – Allows calling the loader with a list containing non-unique elements. If this is not allowed, the loader will raise DuplicatedValuesLoadError in that case.

  • allow_compound – Allows the loader to accept names of compound members (e.g. WHITE = RED | GREEN | BLUE) and the dumper to return names of compound members. If this is allowed, the dumper will use compound member names to serialize the value.

  • name_style – Name style for representing members to the outside world. If it is set, the provider will automatically convert the names of all flag members to the specified convention.

  • map – Mapping for representing members to the outside world. If it is set, the provider will use it to rename members individually; its keys can either be member names as strings or member instances.

Returns:

Desired provider
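A minimal sketch (the Color flag is hypothetical; the order of the dumped names is not asserted here since it is not documented):

from enum import Flag, auto

from adaptix import Retort, flag_by_member_names


class Color(Flag):
    RED = auto()
    GREEN = auto()
    BLUE = auto()


retort = Retort(recipe=[flag_by_member_names(Color)])
assert retort.load(["RED", "BLUE"], Color) == Color.RED | Color.BLUE
assert set(retort.dump(Color.RED | Color.BLUE)) == {"RED", "BLUE"}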

adaptix.name_mapping(
pred: Omittable[Pred] = Omitted(),
*,
skip: Omittable[Iterable[Pred] | Pred] = Omitted(),
only: Omittable[Iterable[Pred] | Pred] = Omitted(),
map: Omittable[NameMap] = Omitted(),
as_list: Omittable[bool] = Omitted(),
trim_trailing_underscore: Omittable[bool] = Omitted(),
name_style: Omittable[NameStyle | None] = Omitted(),
omit_default: Omittable[Iterable[Pred] | Pred | bool] = Omitted(),
extra_in: Omittable[ExtraIn] = Omitted(),
extra_out: Omittable[ExtraOut] = Omitted(),
chain: Chain | None = Chain.FIRST,
) Provider#

A name mapping decides which fields will be presented to the outside world and how they will look.

The mapping process consists of two stages:

  1. Determining which fields are presented

  2. Mutating names of presented fields

The skip parameter has higher priority than only.

The mutating parameters work as follows: the mapper first tries to take a name from map; if the field is not present in map, the trailing underscore is trimmed and the name style conversion is applied.

The field name must follow snake_case to be convertible.

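A minimal sketch of the behavior described above (trailing underscore trimmed, then name style applied; the Person model is hypothetical):

from dataclasses import dataclass

from adaptix import NameStyle, Retort, name_mapping


@dataclass
class Person:
    first_name: str
    class_: str  # trailing underscore sidesteps the keyword clash


retort = Retort(
    recipe=[
        name_mapping(Person, name_style=NameStyle.CAMEL),
    ],
)
assert retort.dump(Person(first_name="Ada", class_="A")) == {
    "firstName": "Ada",
    "class": "A",
}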
adaptix.default_dict(
pred: str | Pattern | type | Any | LocStackChecker | LocStackPattern,
default_factory: Callable,
) Provider#

defaultdict provider with an overridden default_factory parameter

Parameters:
  • pred – Predicate specifying where the provider should be used. See Predicate system for details.

  • default_factory – default_factory parameter of the defaultdict instance to be created by the loader
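A minimal sketch (assuming a bare defaultdict predicate matches any defaultdict location):

from collections import defaultdict
from typing import DefaultDict, List

from adaptix import Retort, default_dict

retort = Retort(
    recipe=[
        default_dict(defaultdict, default_factory=list),
    ],
)
loaded = retort.load({"a": [1, 2]}, DefaultDict[str, List[int]])
assert loaded["missing"] == []  # the configured default_factory kicks in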

class adaptix.AdornedRetort(
*,
recipe: Iterable[Provider] = (),
strict_coercion: bool = True,
debug_trail: DebugTrail = DebugTrail.ALL,
)#

Bases: OperatingRetort

A retort implementing the high-level user interface

replace(
*,
strict_coercion: bool | None = None,
debug_trail: DebugTrail | None = None,
) AR#
extend(
*,
recipe: Iterable[Provider],
) AR#
get_loader(
tp: Type[T],
) Callable[[Any], T]#
get_dumper(
tp: Type[T],
) Callable[[T], Any]#
load(
data: Any,
tp: Type[T],
/,
) T#
load(data: Any, tp: Any, /) Any
recipe: ClassVar[Iterable[Provider]]#
dump(
data: T,
tp: Type[T],
/,
) Any#
dump(data: Any, tp: Any | None = None, /) Any
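A sketch of deriving configured copies from a base retort (the provider passed to extend is illustrative):

from adaptix import Chain, DebugTrail, Retort, loader

base_retort = Retort()

# same recipe, but cheaper error handling for trusted input
fast_retort = base_retort.replace(
    strict_coercion=False,
    debug_trail=DebugTrail.DISABLE,
)

# a copy whose recipe also pre-strips every string
extended_retort = base_retort.extend(
    recipe=[loader(str, str.strip, Chain.FIRST)],
)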
class adaptix.FilledRetort(
recipe: Iterable[Provider] = (),
)#

Bases: OperatingRetort, ABC

A retort that contains built-in providers

recipe: ClassVar[Iterable[Provider]]#
class adaptix.Retort(
*,
recipe: Iterable[Provider] = (),
strict_coercion: bool = True,
debug_trail: DebugTrail = DebugTrail.ALL,
)#

Bases: FilledRetort, AdornedRetort

recipe: ClassVar[Iterable[Provider]]#
exception adaptix.TypedDictAt38Warning#

Bases: UserWarning

Runtime introspection of TypedDict on Python 3.8 does not support inheritance. Please update Python or take these limitations into account when suppressing this warning

class adaptix.Omitted#

Bases: object

exception adaptix.CannotProvide(
message: str = '',
*,
is_terminal: bool = False,
is_demonstrative: bool = False,
)#

Bases: Exception

exception adaptix.AggregateCannotProvide(
message: str,
exceptions: Sequence[CannotProvide],
*,
is_terminal: bool = False,
is_demonstrative: bool = False,
)#

Bases: ExceptionGroup[CannotProvide], CannotProvide

derive(
excs: Sequence[CannotProvide],
) AggregateCannotProvide#
derive_upcasting(
excs: Sequence[CannotProvide],
) CannotProvide#

Same as method derive but allow passing an empty sequence

classmethod make(
message: str,
exceptions: Sequence[CannotProvide],
*,
is_terminal: bool = False,
is_demonstrative: bool = False,
) CannotProvide#
class adaptix.Chain(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)#

Bases: Enum

FIRST = 'FIRST'#
LAST = 'LAST'#
class adaptix.ExtraCollect#

Bases: object

Collect extra data and pass it to the object

class adaptix.ExtraForbid#

Bases: object

Raise an error if extra data is encountered

class adaptix.ExtraKwargs#

Bases: object

class adaptix.ExtraSkip#

Bases: object

Ignore any extra data

class adaptix.Mediator#

Bases: ABC, Generic[V]

Mediator is an object that gives a provider access to other providers and stores the state of the current search.

Mediator is a proxy to the providers of a retort.

abstract provide(
request: Request[T],
) T#

Get response of sent request.

Parameters:

request – A request instance

Returns:

Result of the request processing

Raises:

CannotProvide – no provider able to process the request was found

abstract provide_from_next() V#

Forward the current request to the providers placed after the current provider in the recipe.

final delegating_provide(
request: Request[T],
error_describer: Callable[[CannotProvide], str] | None = None,
) T#
final mandatory_provide(
request: Request[T],
error_describer: Callable[[CannotProvide], str] | None = None,
) T#
final mandatory_provide_by_iterable(
requests: Iterable[Request[T]],
error_describer: Callable[[], str] | None = None,
) Iterable[T]#
class adaptix.NameStyle(
value,
names=None,
*,
module=None,
qualname=None,
type=None,
start=1,
boundary=None,
)#

Bases: Enum

An enumeration of different naming conventions

LOWER_SNAKE = 'lower_snake'#
CAMEL_SNAKE = 'camel_Snake'#
PASCAL_SNAKE = 'Pascal_Snake'#
UPPER_SNAKE = 'UPPER_SNAKE'#
LOWER_KEBAB = 'lower-kebab'#
CAMEL_KEBAB = 'camel-Kebab'#
PASCAL_KEBAB = 'Pascal-Kebab'#
UPPER_KEBAB = 'UPPER-KEBAB'#
LOWER = 'lowercase'#
CAMEL = 'camelCase'#
PASCAL = 'PascalCase'#
UPPER = 'UPPERCASE'#
LOWER_DOT = 'lower.dot'#
CAMEL_DOT = 'camel.Dot'#
PASCAL_DOT = 'Pascal.Dot'#
UPPER_DOT = 'UPPER.DOT'#
class adaptix.LocStackPattern(
stack: Tuple[LocStackChecker, ...],
)#

Bases: object

property ANY: AnyLocStackChecker#
generic_arg(
pos: int,
pred: str | Pattern | type | Any | LocStackChecker | LocStackPattern,
) Pat#
build_loc_stack_checker() LocStackChecker#
adaptix.create_loc_stack_checker(
pred: str | Pattern | type | Any | LocStackChecker | LocStackPattern,
) LocStackChecker#
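A minimal sketch of a pattern predicate (the Receipt model is hypothetical):

from dataclasses import dataclass

from adaptix import P, Retort, loader


@dataclass
class Receipt:
    total: str


# match the total field only when it occurs inside Receipt
retort = Retort(
    recipe=[
        loader(P[Receipt].total, str.strip),
    ],
)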
class adaptix.Provider#

Bases: ABC

An object that can process Request instances

abstract apply_provider(
mediator: Mediator[T],
request: Request[T],
) T#

Handle a request instance and return a value of the type required by the request. Behavior must be the same during the provider object's lifetime

Raises:

CannotProvide – provider cannot process passed request

exception adaptix.NoSuitableProvider(message: str)#

Bases: Exception

class adaptix.Request#

Bases: Generic[T]

An object that contains data to be processed by Provider.

Generic argument indicates which object should be returned after request processing.

Request must always be a hashable object

adaptix.load(data: Any, tp: Any, /)#
adaptix.dump(data: Any, tp: Any | None = None, /) Any#

Changelog#

Versions follow Semantic Versioning (<major>.<minor>.<patch>), but with minor syntax differences to satisfy python package version specifiers.

Until a stable version is released (end of beta), new versions may contain backward-incompatible changes, but we will strive to deprecate features first instead of removing them immediately. After that, breaking changes will only be introduced in major versions.

Non-guaranteed behavior

Some aspects of behavior are not guaranteed and could change in any release without any mention in the changelog (they may even vary between environments or runs).

Such details are highlighted in the documentation via this admonition.


3.0.0b5 – 2024-04-20#

Features#

  • Add support for Pydantic models!

    Now you can work with pydantic models like any others: construct them from dicts, serialize them to dicts, and convert them to and from any other model.

    Also, you can use integrations.pydantic.native_pydantic to delegate loading and dumping to pydantic itself.

  • Add support for dumping Literal inside Union. #237

  • Add support for BytesIO and IO[bytes]. #270

  • Error messages are more obvious.

Breaking Changes#

  • Forbid use of constructs like P[SomeClass].ANY because it is misleading (you have to use P.ANY directly).

  • Private fields (any field starting with underscore) are skipped at dumping. See Private fields dumping for details.


3.0.0b4 – 2024-03-30#

Features#

  • Add coercer for builtin iterables and dict.

  • Models can be automatically converted inside compound types like Optional, list, dict etc.

  • Add conversion.from_param predicate factory to match only parameters

  • An error of loader, dumper, and converter generation contains a much more readable location.

    For example:

    • Linking: `Book.author_ids: list[int] -> BookDTO.author_ids: list[str]`

    • Location: `Stub.f3: memoryview`

Breaking Changes#

  • Now, parameters are automatically linked only to top-level model fields. For manual linking, you can use the new adaptix.conversion.from_param predicate factory.

Bug Fixes#

  • Fix fail to import adaptix package on python 3.8-3.10 when -OO is used.

  • Fix unexpected error on creating coercer between fields with Optional type.

  • Fix unexpected error with type vars getting from UnionType.


3.0.0b3 – 2024-03-08#

Features#

  • conversion.link accepts coercer parameter. #256

  • Add conversion.link_constant to link constant values and constant factories. #258

  • Add coercer for the case when the source union is a subset of the destination union (a simple == check is used). #242

  • No coercer error now contains type information. #252

  • Add coercer for Optional[S] -> Optional[D] if S is coercible to D. #254

Bug Fixes#

  • Fix SyntaxError with lambda in coercer. #243

  • Model dumping now tries to preserve the original order of fields inside the dict. #247

  • Fix introspection of sqlalchemy models with column_property (all ColumnElement instances are ignored except Column itself). #250


3.0.0b2 – 2024-02-16#

Features#

  • A new major feature is out! Added support for model conversion! Now you can generate boilerplate converter functions with adaptix. See conversion tutorial for details.

  • Basic support for sqlalchemy models is added!

  • Added enum support inside Literal. #178

  • Added flags support.

    Now adaptix has two different ways to process flags: flag_by_exact_value (by default) and flag_by_member_names. #197

  • Added defaultdict support. #216

  • Added support of mapping for enum_by_name provider. #223

  • Created the correct path (fixing a Python bug) for processing Required and NotRequired with stringified annotations or from __future__ import annotations. #227

Breaking Changes#

  • Due to refactoring of predicate system required for new features:

    1. create_request_checker was renamed to create_loc_stack_checker

    2. RequestPattern (class of P) was renamed to LocStackPattern

    3. method RequestPattern.build_request_checker() was renamed to LocStackPattern.build_loc_stack_checker()

Deprecations#

  • Standardize names inside adaptix.load_error. Import of old names will emit DeprecationWarning.

    Old name → New name:

    MsgError → MsgLoadError
    ExtraFieldsError → ExtraFieldsLoadError
    ExtraItemsError → ExtraItemsLoadError
    NoRequiredFieldsError → NoRequiredFieldsLoadError
    NoRequiredItemsError → NoRequiredItemsLoadError
    ValidationError → ValidationLoadError
    BadVariantError → BadVariantLoadError
    DatetimeFormatMismatch → FormatMismatchLoadError

Bug Fixes#

  • Fixed parameter shuffling when an optional field is skipped. #229


3.0.0b1 – 2023-12-16#

Start of changelog.

Contributing#

How to setup the repository#

Warning

All internal tools and scripts are designed only to work on Linux. You have to use WSL to develop the project on Windows.

  1. Install Just

    Just is a command runner that is used here instead of make.

  2. Install all needed python interpreters

    • CPython 3.8

    • CPython 3.9

    • CPython 3.10

    • CPython 3.11

    • CPython 3.12

    • PyPy 3.8

    • PyPy 3.9

    • PyPy 3.10

  3. Clone repository with submodules

    git clone --recurse-submodules https://github.com/reagento/adaptix
    

    If you already cloned the project and forgot --recurse-submodules, the directory benchmarks/release_data will be empty. You can fix it by executing git submodule update --init --recursive.

  4. Create venv and run

    just bootstrap
    
  5. Run main commands to check that everything is ok

    just lint
    just test-all
    

Tools overview#

Venv managing#

Bootstrap#

Initial preparation of the venv and repo for development.

just bootstrap
Deps sync#

Sync all dependencies. This needs to be run if the committed dependencies have changed.

just venv-sync
Compile dependencies#

Compile raw dependencies (requirements/raw/*) into files with locked versions via pip-tools.

just deps-compile

By default, pip-tools tries to keep the previously locked versions. To upgrade the locked dependencies, use:

just deps-compile-upgrade

Linting#

Run linters#

Run all linters. Should be executed before tests.

just lint

Testing#

Run basic tests#

Sequentially run basic tests on all python versions. It is useful to rapidly check that the code is working.

just test
Run all tests#

Run all tests on all python versions in parallel.

just test-all
Run all tests (sequentially)#

Sequentially run all tests on all python versions. Failed parallel runs can have unclear output.

just test-all-seq
Produce coverage report#

Create a coverage report. All coverage reports will be merged into the coverage.xml file in the working directory. You can import it into your IDE. Instruction for PyCharm.

just cov

Documentation#

Build documentation#

Generate html files with documentation. Output files will be placed in docs-build/html.

just doc
Clean generated documentation#

Clean the generated documentation and build cache. Sometimes sphinx cannot detect changes in non-rst files. This command fixes that.

just doc-clean