directory.archive
Attributes
Exceptions
- DirectoryFileNotFound: File not found.
Classes
- _Sentinel: Create a collection of name/value pairs.
- FieldParser: Parses records read by the directory archive reader.
- DirectoryArchiveReader: Reading part of DirectoryArchive.
- DirectoryArchiveWriter: Writing part of DirectoryArchive.
- DirectoryArchive: Offers the ability to read/write a directory and its entries to a folder.
- DirectoryZipArchive: Offers the same interface as the DirectoryArchive, additionally zipping the folder on write and extracting the zip on read.
Module Contents
- class directory.archive._Sentinel(*args, **kwds)[source]
Bases:
enum.Enum
Create a collection of name/value pairs.
Example enumeration:
>>> class Color(Enum):
...     RED = 1
...     BLUE = 2
...     GREEN = 3
Access them by:
attribute access:
>>> Color.RED
<Color.RED: 1>
value lookup:
>>> Color(1)
<Color.RED: 1>
name lookup:
>>> Color['RED']
<Color.RED: 1>
Enumerations can be iterated over, and know how many members they have:
>>> len(Color)
3
>>> list(Color)
[<Color.RED: 1>, <Color.BLUE: 2>, <Color.GREEN: 3>]
Methods can be added to enumerations, and members can have their own attributes – see the documentation for details.
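The module's private _Sentinel enum (below) follows a common pattern: an Enum member makes a unique, readable default marker. The helper function and member name in this sketch are illustrative assumptions, not taken from the module itself.

```python
from enum import Enum

class _Sentinel(Enum):
    # A single member is enough; identity comparison ("is") makes it a
    # reliable marker that no caller-supplied value can collide with.
    MISSING = "missing"

def get_value(mapping, key, default=_Sentinel.MISSING):
    """Distinguish 'key absent' from 'key present with value None'."""
    value = mapping.get(key, default)
    if value is _Sentinel.MISSING:
        return "absent"
    return value

print(get_value({"a": None}, "a"))  # None is a real stored value
print(get_value({"a": None}, "b"))  # "absent"
```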
- exception directory.archive.DirectoryFileNotFound(file_id: str, entry_name: str, filename: str)[source]
Bases:
FileNotFoundError
File not found.
- class directory.archive.FieldParser(directory: onegov.directory.models.Directory, archive_path: pathlib.Path)[source]
Parses records read by the directory archive reader.
- get_field(key: str) ParsedField | None [source]
CSV file header parsing is inconsistent with the internal id of the field (field.id). The headers are lowercased, so the first lookup does not yield the field; the second also fails because characters like ( are not replaced by underscores.
- parse_fileinput(key: str, value: str, field: onegov.form.parser.core.FileinputField) onegov.core.utils.Bunch | None [source]
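The header mismatch that get_field works around can be sketched with plain strings. The two normalization functions below are illustrative assumptions about the behaviour described above, not the module's actual code.

```python
import re

def internal_id(label: str) -> str:
    # Assumed id scheme: lowercase, non-word characters collapsed to "_".
    return re.sub(r"\W+", "_", label.strip().lower()).strip("_")

def csv_header(label: str) -> str:
    # Assumed CSV behaviour: the header is merely lowercased.
    return label.lower()

label = "Name (Last)"
print(internal_id(label))  # name_last
print(csv_header(label))   # name (last) -- does not match the internal id
```

Because the header keeps the parenthesis and space, neither the raw header nor its lowercased form matches the underscored internal id, so a parser needs a dedicated lookup such as get_field.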
- class directory.archive.DirectoryArchiveReader[source]
Reading part of
DirectoryArchive.
- read(target: onegov.directory.models.Directory | None = None, skip_existing: bool = True, limit: int = 0, apply_metadata: bool = True, after_import: Callable[[DirectoryEntry], Any] | None = None) onegov.directory.models.Directory [source]
Reads the archive resulting in a dictionary and entries.
- Parameters:
target – Uses the given directory as a target for the read. Otherwise, a new directory is created in memory (default).
skip_existing – Excludes already existing entries from being added to the directory. Only applies if target is not None.
limit – Limits the number of records which are imported. If the limit is reached, the read process silently ignores all extra items.
apply_metadata – True if the metadata found in the archive should be applied to the directory.
after_import – Called with the newly added entry, right after it has been added.
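The interplay of skip_existing, limit, and after_import can be sketched with plain lists in place of onegov models. This is a hedged illustration of the parameter semantics described above, not the reader's implementation.

```python
def import_records(records, existing, skip_existing=True, limit=0,
                   after_import=None):
    """Toy import loop mirroring the documented read() parameters."""
    imported = []
    for record in records:
        if limit and len(imported) >= limit:
            break  # extra items past the limit are silently ignored
        if skip_existing and record in existing:
            continue  # already-present entries are excluded
        imported.append(record)
        if after_import:
            after_import(record)  # called right after the entry is added
    return imported

print(import_records(["a", "b", "c", "d"], existing={"b"}, limit=2))
# ['a', 'c']
```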
- class directory.archive.DirectoryArchiveWriter[source]
Writing part of
DirectoryArchive.
- write(directory: onegov.directory.models.Directory, *args: Any, entry_filter: DirectoryEntryFilter | None = None, query: Query[DirectoryEntry] | None = None, **kwargs: Any) None [source]
Writes the given directory.
- write_directory_metadata(directory: onegov.directory.models.Directory) None [source]
Writes the metadata.
- write_directory_entries(directory: onegov.directory.models.Directory, entry_filter: DirectoryEntryFilter | None = None, query: Query[DirectoryEntry] | None = None) None [source]
Writes the directory entries. Allows filtering with a custom entry_filter function as well as passing a query object.
- write_paths(session: sqlalchemy.orm.Session, paths: dict[str, str], fid_to_entry: dict[str, str] | None = None) None [source]
Writes the given files to the archive path.
- Parameters:
session – The database session in use.
paths – A dictionary with each key being a file id and each value being a path where this file id should be written to.
fid_to_entry – A dictionary mapping file ids to entry names.
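The paths mapping can be pictured as follows. This sketch substitutes an in-memory contents dict for the database session, so the function below is an illustrative stand-in, not the module's write_paths.

```python
import pathlib
import tempfile

def write_paths(archive_path: pathlib.Path, paths: dict[str, str],
                contents: dict[str, bytes]) -> None:
    """Write each file id's data to its relative path inside the archive.

    `contents` stands in for data that the real method would load via the
    database session.
    """
    for file_id, rel_path in paths.items():
        target = archive_path / rel_path
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_bytes(contents[file_id])

with tempfile.TemporaryDirectory() as tmp:
    root = pathlib.Path(tmp)
    # key: file id, value: where that file should be written in the archive
    write_paths(root, {"fid-1": "logo/entry-1.png"}, {"fid-1": b"\x89PNG"})
    print((root / "logo" / "entry-1.png").exists())  # True
```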
- class directory.archive.DirectoryArchive(path: _typeshed.StrPath, format: Literal['json', 'csv', 'xlsx'] = 'json', transform: FieldValueTransform | None = None)[source]
Bases:
DirectoryArchiveReader, DirectoryArchiveWriter
Offers the ability to read/write a directory and its entries to a folder.
Usage:
archive = DirectoryArchive('/tmp/directory')
archive.write()

archive = DirectoryArchive('/tmp/directory')
archive.read()
The archive content is as follows:
metadata.json (contains the directory data)
data.json/data.csv/data.xlsx (contains the directory entries)
./<field_id>/<entry_id>.<ext> (files referenced by the directory entries)
The directory entries are stored as json, csv or xlsx. Json is preferred.
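The layout above can be reproduced with the standard library alone. The file names come from the documentation; the metadata and entry data in this sketch are made up for illustration.

```python
import json
import pathlib
import tempfile

def write_archive(path: pathlib.Path) -> None:
    """Build the documented archive layout with placeholder content."""
    path.mkdir(parents=True, exist_ok=True)
    # metadata.json: the directory data
    (path / "metadata.json").write_text(json.dumps({"title": "Example"}))
    # data.json: the directory entries (json is the preferred format)
    (path / "data.json").write_text(json.dumps([{"name": "First Entry"}]))
    # ./<field_id>/<entry_id>.<ext>: files referenced by the entries
    (path / "photo").mkdir(exist_ok=True)
    (path / "photo" / "first-entry.png").write_bytes(b"")

with tempfile.TemporaryDirectory() as tmp:
    archive = pathlib.Path(tmp) / "directory"
    write_archive(archive)
    print(sorted(p.name for p in archive.iterdir()))
    # ['data.json', 'metadata.json', 'photo']
```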
- class directory.archive.DirectoryZipArchive(path: _typeshed.StrPath, *args: Any, **kwargs: Any)[source]
Offers the same interface as the DirectoryArchive, additionally zipping the folder on write and extracting the zip on read.
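The zip-on-write, extract-on-read behaviour can be sketched with shutil rather than the DirectoryZipArchive class itself; the folder and file names below are illustrative.

```python
import pathlib
import shutil
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    root = pathlib.Path(tmp)
    folder = root / "directory"
    folder.mkdir()
    (folder / "metadata.json").write_text("{}")

    # write: pack the archive folder into a single zip file
    zip_path = shutil.make_archive(str(root / "directory"), "zip", folder)

    # read: extract the zip back into a folder before reading it
    target = root / "extracted"
    shutil.unpack_archive(zip_path, target)
    print((target / "metadata.json").read_text())  # {}
```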