adf-obdd

Abstract Dialectical Frameworks solved by Binary Decision Diagrams; developed in Dresden (ADF-BDD)





This is the readme for the executable solver.

Usage

USAGE:
    adf-bdd [OPTIONS] <INPUT>

ARGS:
    <INPUT>    Input filename

OPTIONS:
        --an                      Sorts variables in an alphanumeric manner
        --com                     Compute the complete models
        --counter <COUNTER>       Set if the (counter-)models shall be computed and printed,
                                  possible values are 'nai' and 'mem' for naive and memoization
                                  respectively (only works in hybrid and naive mode)
        --export <EXPORT>         Export the adf-bdd state after parsing and BDD instantiation to
                                  the given filename
        --grd                     Compute the grounded model
    -h, --help                    Print help information
        --heu <HEU>               Choose which heuristics shall be used by the nogood-learning
                                  approach [possible values: Simple, MinModMinPathsMaxVarImp,
                                  MinModMaxVarImpMinPaths]
        --import                  Import an adf-bdd state instead of an adf
        --lib <IMPLEMENTATION>    Choose the bdd implementation of either 'biodivine', 'naive', or
                                  'hybrid' [default: hybrid]
        --lx                      Sorts variables in a lexicographic manner
    -q                            Sets log verbosity to only errors
        --rust_log <RUST_LOG>     Sets the verbosity to 'warn', 'info', 'debug' or 'trace' if -v and
                                  -q are not used [env: RUST_LOG=debug]
        --stm                     Compute the stable models
        --stmca                   Compute the stable models with the help of modelcounting using
                                  heuristics a
        --stmcb                   Compute the stable models with the help of modelcounting using
                                  heuristics b
        --stmng                   Compute the stable models with the nogood-learning based approach
        --stmpre                  Compute the stable models with a pre-filter (only hybrid lib-mode)
        --stmrew                  Compute the stable models with a single-formula rewriting (only
                                  hybrid lib-mode)
        --stmrew2                 Compute the stable models with a single-formula rewriting on
                                  internal representation (only hybrid lib-mode)
        --twoval                  Compute the two valued models with the nogood-learning based
                                  approach
    -v                            Sets log verbosity (multiple times means more verbose)
    -V, --version                 Print version information

Note that import and export only work if the naive library is chosen.
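
For example, a round trip through export and import could look like this (the file names my_instance.adf and my_instance.state are placeholders):

$> adf-bdd my_instance.adf --lib naive --export my_instance.state
$> adf-bdd my_instance.state --import --lib naive --grd

The first call parses the ADF, instantiates the BDDs, and writes the resulting state to my_instance.state; the second call loads that state and computes the grounded model.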

Right now there is no additional information attached to the computed models, so if you use --com --grd --stm the borders between the results are not explicitly marked in the output. They can easily be identified though:

  • The computation is always in the same order
    • grd
    • com
    • stm
  • We know that there is always exactly one grounded model
  • We know that there always exists at least one complete model (i.e. the grounded one)
  • We know that there does not need to exist a stable model
  • We know that every stable model is a complete model too
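
Putting this together, a single call that requests all three semantics (my_instance.adf is again a placeholder) prints the grounded model first, followed by the complete models and finally the stable models:

$> adf-bdd my_instance.adf --grd --com --stm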

Input-file format:

Each statement is defined by an ASP-style unary predicate s, where the enclosed term represents the label of the statement. The binary predicate ac relates each statement to one propositional formula in prefix notation, with the logical operations and constants as follows:

  • and(x,y): conjunction
  • or(x,y): disjunction
  • iff(x,y): if and only if
  • xor(x,y): exclusive or
  • neg(x): classical negation
  • c(v): constant symbol “verum” - tautology/top
  • c(f): constant symbol “falsum” - inconsistency/bot
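
As a small illustration of this format (the statement labels a, b, and c are arbitrary), an ADF with three statements could be written as ASP-style facts:

s(a).
s(b).
s(c).
ac(a,c(v)).
ac(b,and(a,neg(c))).
ac(c,neg(b)).

Here a is unconditionally accepted (its acceptance condition is verum), b is accepted whenever a is accepted and c is not, and c is accepted whenever b is not.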

Development notes

To build the binary, you need to run

$> cargo build --workspace --release

To build the binary with debug-symbols, run

$> cargo build --workspace

To run all the tests placed in the submodule, you need to run

$> git submodule init

the first time. Afterwards, you need to update the content of the submodule to the currently used revision by running

$> git submodule update

The tests can be started by using the test-framework of cargo, i.e.

$> cargo test

Note that some of the instances are quite big, so it might take some time to finish all the tests. If you do not initialise the submodule, tests will “only” run on the other unit tests and (possibly forthcoming) other integration tests. Due to the way the test modules are generated, you need to call

$> cargo clean

if you change some of your test-cases.

To remove the tests just type

$> git submodule deinit res/adf-instances

or

$> git submodule deinit --all

Acknowledgements

This work is partly supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) in project number 389792660 (TRR 248, Center for Perspicuous Computing), by the Bundesministerium für Bildung und Forschung (BMBF, Federal Ministry of Education and Research) in the Center for Scalable Data Analytics and Artificial Intelligence (ScaDS.AI), and by the Center for Advancing Electronics Dresden (cfaed).

Affiliation

This work has been partly developed by the Knowledge-Based Systems Group, Faculty of Computer Science of TU Dresden.

Disclaimer

Hosting content here does not establish any formal or legal relation to TU Dresden.