directory.archive

Attributes

UnknownFieldType

UNKNOWN_FIELD

Exceptions

DirectoryFileNotFound

File not found.

Classes

_Sentinel

Create a collection of name/value pairs.

FieldParser

Parses records read by the directory archive reader.

DirectoryArchiveReader

Reading part of DirectoryArchive.

DirectoryArchiveWriter

Writing part of DirectoryArchive.

DirectoryArchive

Offers the ability to read/write a directory and its entries to a folder.

DirectoryZipArchive

Offers the same interface as the DirectoryArchive, additionally zipping the folder on write and extracting the zip on read.

Module Contents

directory.archive.UnknownFieldType: TypeAlias = 'Literal[_Sentinel.UNKNOWN_FIELD]'[source]
class directory.archive._Sentinel(*args, **kwds)[source]

Bases: enum.Enum

Create a collection of name/value pairs.

Example enumeration:

>>> class Color(Enum):
...     RED = 1
...     BLUE = 2
...     GREEN = 3

Access them by:

  • attribute access:

>>> Color.RED
<Color.RED: 1>
  • value lookup:

>>> Color(1)
<Color.RED: 1>
  • name lookup:

>>> Color['RED']
<Color.RED: 1>

Enumerations can be iterated over, and know how many members they have:

>>> len(Color)
3
>>> list(Color)
[<Color.RED: 1>, <Color.BLUE: 2>, <Color.GREEN: 3>]

Methods can be added to enumerations, and members can have their own attributes – see the documentation for details.

UNKNOWN_FIELD[source]
directory.archive.UNKNOWN_FIELD[source]
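
The sentinel is what FieldParser.parse_item() below returns for keys it cannot map to a field. A minimal sketch, assuming an existing FieldParser instance named parser and a key/value pair taken from a record, showing how the sentinel keeps "unknown field" apart from a parsed value that happens to be None:

result = parser.parse_item(key, value)

if result is UNKNOWN_FIELD:
    pass  # assumed: the key did not match any field, so the column is skipped
else:
    name, parsed = result  # parsed may still legitimately be None
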
exception directory.archive.DirectoryFileNotFound(file_id: str, entry_name: str, filename: str)[source]

Bases: FileNotFoundError

File not found.

file_id[source]
entry_name[source]
filename[source]

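A minimal sketch, assuming archive is a DirectoryArchive and the exception surfaces while reading an archive whose referenced files are missing, of catching it and inspecting its attributes:

try:
    directory = archive.read()
except DirectoryFileNotFound as error:
    print(error.file_id, error.entry_name, error.filename)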

class directory.archive.FieldParser(directory: onegov.directory.models.Directory, archive_path: pathlib.Path)[source]

Parses records read by the directory archive reader.

fields_by_human_id[source]
fields_by_id[source]
archive_path[source]
get_field(key: str) → ParsedField | None[source]

CSV file header parsing is inconsistent with the internal id (field.id) of the field. The headers are lowercased, so the first lookup will not yield the field, and the second will not succeed either, because characters like ( are not replaced by underscores.
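
A purely illustrative sketch of that mismatch with a made-up field title; the internal id shown is an assumed normalization, not the form parser's actual rule:

human_id = 'Name (Legal)'    # key as stored in fields_by_human_id
field_id = 'name_legal'      # key as stored in fields_by_id (assumed)
csv_header = 'name (legal)'  # lowercased header as read from the CSV file

csv_header in (human_id, field_id)  # False for both lookups, which is the
                                    # inconsistency described above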

parse_fileinput(key: str, value: str, field: onegov.form.parser.core.FileinputField) → onegov.core.utils.Bunch | None[source]
parse_multiplefileinput(key: str, value: str, field: onegov.form.parser.core.MultipleFileinputField) → tuple[onegov.core.utils.Bunch, ...][source]
parse_generic(key: str, value: str, field: onegov.form.parser.core.ParsedField) → object[source]
parse_item(key: str, value: str) → tuple[str, Any | None] | UnknownFieldType[source]
parse(record: SupportsItems[str, str]) → dict[str, Any | None][source]
class directory.archive.DirectoryArchiveReader[source]

Reading part of DirectoryArchive.

path: pathlib.Path[source]
read(target: onegov.directory.models.Directory | None = None, skip_existing: bool = True, limit: int = 0, apply_metadata: bool = True, after_import: Callable[[DirectoryEntry], Any] | None = None) → onegov.directory.models.Directory[source]

Reads the archive, resulting in a directory and its entries.

Parameters:
  • target – Uses the given directory as a target for the read. Otherwise, a new directory is created in memory (default).

  • skip_existing – Excludes already existing entries from being added to the directory. Only applies if target is not None.

  • limit – Limits the number of records which are imported. If the limit is reached, the read process silently ignores all extra items.

  • apply_metadata – True if the metadata found in the archive should be applied to the directory.

  • after_import – Called with the newly added entry, right after it has been added.
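
A minimal sketch of a read into an existing directory, where target is assumed to be an onegov.directory.models.Directory instance:

archive = DirectoryArchive('/tmp/directory')
directory = archive.read(
    target=target,
    skip_existing=True,                       # only relevant because target is set
    limit=100,                                # import at most 100 records
    after_import=lambda entry: print(entry)   # called once per added entry
)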

apply_metadata(directory: onegov.directory.models.Directory, metadata: dict[str, Any]) → onegov.directory.models.Directory[source]

Applies the metadata to the given directory and returns it.
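
A minimal sketch combining this with read_metadata() below, assuming archive is a DirectoryArchive, directory an existing Directory, and that applying the archive's metadata without a full read() is the intended use:

directory = archive.apply_metadata(directory, archive.read_metadata())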

read_metadata() → dict[str, Any][source]

Returns the metadata as a dictionary.

read_data() → Sequence[dict[str, Any]][source]

Returns the entries as a sequence of dictionaries.

read_data_from_json() → list[dict[str, Any]][source]
read_data_from_csv() → tuple[dict[str, Any], ...][source]
read_data_from_xlsx() → tuple[dict[str, Any], ...][source]
class directory.archive.DirectoryArchiveWriter[source]

Writing part of DirectoryArchive.

path: pathlib.Path[source]
format: Literal['json', 'csv', 'xlsx'][source]
transform: FieldValueTransform[source]
write(directory: onegov.directory.models.Directory, *args: Any, entry_filter: DirectoryEntryFilter | None = None, query: Query[DirectoryEntry] | None = None, **kwargs: Any) → None[source]

Writes the given directory.
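
A minimal sketch that writes a directory in a non-default format, where directory is assumed to be an onegov.directory.models.Directory instance:

archive = DirectoryArchive('/tmp/directory', format='csv')
archive.write(directory)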

write_directory_metadata(directory: onegov.directory.models.Directory) → None[source]

Writes the metadata.

write_directory_entries(directory: onegov.directory.models.Directory, entry_filter: DirectoryEntryFilter | None = None, query: Query[DirectoryEntry] | None = None) → None[source]

Writes the directory entries. Allows filtering with a custom entry_filter function as well as passing a query object.

write_paths(session: sqlalchemy.orm.Session, paths: dict[str, str], fid_to_entry: dict[str, str] | None = None) → None[source]

Writes the given files to the archive path.

Parameters:
  • session – The database session in use.

  • paths – A dictionary with each key being a file id and each value being a path where this file id should be written to.

  • fid_to_entry – A dictionary mapping each file id to the corresponding entry name.
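
A minimal sketch of the expected argument shapes, with made-up file ids, paths and entry names; it assumes archive is a DirectoryArchive, session a SQLAlchemy session, and that the paths are relative to the archive path, following the layout described under DirectoryArchive:

paths = {
    'abcd1234': 'images/first-entry.png',   # file id -> path to write the file to
}
fid_to_entry = {
    'abcd1234': 'First Entry',              # file id -> entry name
}
archive.write_paths(session, paths, fid_to_entry)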

write_json(path: pathlib.Path, data: onegov.core.types.JSON_ro) → None[source]
write_xlsx(path: pathlib.Path, data: Iterable[dict[str, Any]]) → None[source]
write_csv(path: pathlib.Path, data: Iterable[dict[str, Any]]) → None[source]
class directory.archive.DirectoryArchive(path: _typeshed.StrPath, format: Literal['json', 'csv', 'xlsx'] = 'json', transform: FieldValueTransform | None = None)[source]

Bases: DirectoryArchiveReader, DirectoryArchiveWriter

Offers the ability to read/write a directory and its entries to a folder.

Usage:

archive = DirectoryArchive('/tmp/directory')
archive.write(directory)  # directory is an onegov.directory.models.Directory

archive = DirectoryArchive('/tmp/directory')
directory = archive.read()

The archive content is as follows:

  • metadata.json (contains the directory data)

  • data.json/data.csv/data.xlsx (contains the directory entries)

  • ./<field_id>/<entry_id>.<ext> (files referenced by the directory entries)

The directory entries are stored as JSON, CSV or XLSX. JSON is preferred.

path[source]
format[source]
transform[source]
class directory.archive.DirectoryZipArchive(path: _typeshed.StrPath, *args: Any, **kwargs: Any)[source]

Offers the same interface as the DirectoryArchive, additionally zipping the folder on write and extracting the zip on read.

format: Literal['zip'] = 'zip'[source]
path[source]
temp[source]
archive[source]
classmethod from_buffer(buffer: SupportsReadAndSeek) → Self[source]

Creates a zip archive instance from a file object in memory.
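
A minimal sketch, assuming the zip content is already available in memory as bytes (for example from an upload):

import io

archive = DirectoryZipArchive.from_buffer(io.BytesIO(zip_bytes))
directory = archive.read()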

write(directory: onegov.directory.models.Directory, *args: Any, **kwargs: Any) → None[source]
read(*args: Any, **kwargs: Any) → onegov.directory.models.Directory[source]
compress() → None[source]
extract() → None[source]