A General Specification of an Efficient Java Implementation
for Representing and Interpreting OSF Constraints
Hassan Aït-Kaci
HAK's Language Technology
All contents Copyright © by
Hassan Aït-Kaci—All Rights Reserved
Last modified on Sat Sep 15 18:04:22 2018 by hak
This is a high-level specification for implementing an operational
system in Java for Order-Sorted Feature (OSF) constraints. It describes
general syntactic aspects for defining a sort taxonomy and OSF terms
(also known as ψ-terms) using such a taxonomy. It discusses
considerations for an optimized operational semantics for the efficient
representation and interpretation of expressions using
ψ-terms. [N.B.: the expression "ψ-term" was
introduced originally by the author in his
PhD
thesis (Penn, 1984) as a shorthand for "property and
sort inheritance term," as a formalization of some
parts of Ron Brachman's "SI-Nets," or "Structured Inheritance
Networks," defined informally in the
latter's PhD
thesis (Harvard, 1977).]
In particular, we cover implementation details for encoding partially
ordered sorts using binary codes. We also describe the basic building
blocks of a general execution architecture using this setup to implement
ψ-term unification and generalisation. For general user convenience, we
also suggest useful pragmas that can come in handy in a practical language
implementation for experimenting with OSF constraints.
Note that this is just an initial specification for guiding the Java
implementation of such a language whose workings are based on an
underlying concrete architecture such as the one we describe. One can
peruse the
initial set of source files that
are available. An actual first language prototype testing some of the
issues described in this specification is realized using the
HLT Jacc grammar metacompiler and HLT language tools
implemented by the author. It is called "OSF V0" (for "Version
0")—see
the source
files.
To make some aspects more tangible, this specification will make
informal reference to OSF V0's specific syntax when discussing general
structural and operational issues. However, this specific syntax is not
binding for future, more complete, versions of OSF technology, and neither
are all the points discussed, as variations and departures from some
details are bound to occur in future implementations. Nevertheless, the
overall structure is expected to stand as described.
 Declaring a symbol partial ordering, where the symbols are sorts
that denote sets, and the order denotes an
"isa" relation between these sorts (i.e., subset).
Such a basic declaration is of the form s_{1} <
s_{2}., which declares the sort s_{1}
to be a subsort of the sort s_{2}; the shorthand
notation s_{1}, ..., s_{n} < s. is
equivalent to s_{1} < s., ...,
s_{n} < s.; and similarly, s < s_{1},
..., s_{n}. is shorthand for s <
s_{1}., ..., s < s_{n}..
Possibly redundant or cyclic declarations are detected upon sort
encoding (see below). While redundant declarations are harmless and may
simply be reported as warnings, cyclic declarations must be flagged as
errors as they make no sense.
 The syntax of a ψ-term is of the form: [#X:] s[(
[f_{1} ⇒] t_{1}, ..., [f_{n} ⇒]
t_{n} )], where:
 [...] means that whatever "..." may be
is optional;
 #X is a so-called "tag" that can be thought of as a
logical variable "bound" to (or tagging) the ψ-term that
follows it;
 s is a sort expression taking one of the following forms:
 a sort symbol (which denotes a set);
 a sort negation of the form !s, where s is a sort expression (which
denotes its complement);
 a disjunctive sort of the form {s_{1}; ...; s_{n}},
where the s_{i}'s are sort symbols (which denotes their union);
 a sort conjunction of the form s_{1} &
s_{2}, where s_{1} and
s_{2} are sort expressions (which denotes their intersection);
 a sort disjunction of the form s_{1} | s_{2}, where s_{1} and
s_{2} are sort expressions (which denotes their union);
 a sort complementation of the form s_{1}\s_{2},
where s_{1} and s_{2} are
sort expressions (which denotes their relative complementation "s_{1} but not
s_{2}");
 (s), where s is a sort expression.
 the f_{i}'s are feature symbols, and denote functions;
 the t_{i}'s are ψ-terms.
 A definition is of the
form $Id[(#X_{1},...,#X_{n})] = t.,
where Id is an identifier, t is a ψ-term, and
the (optional) #X_{i} arguments are variable tags
occurring in t. The name being defined
(i.e., $Id) may not appear in the ψ-term defining
it (i.e., t), nor in any other ψ-term preceding
this definition (see why). This facility is
only provided as a macro-like convenience and has no formal meaning
other than syntactic shorthand. An occurrence
of $Id(#Y_{1},...,#Y_{n}) is just an
abbreviation for a new copy of the ψ-term it stands for
(i.e., t) in which #Y_{i} is
substituted for #X_{i} in t, and all other
tags in t have been consistently renamed afresh. (By
"consistently renamed," we mean that all occurrences of a tag
must be renamed with the same tag.) If a defined name has no tag
arguments, then the parentheses are omitted, writing $Id
rather than $Id(). The specific lexical category of
type $Id is provided so that such names are not confused with
sorts, features, and tags.
 A dyadic ψ-term GLB (unification) operation e_{1}
∧ e_{2}, which denotes the intersection of the sets
denoted by each term; and a dyadic ψ-term LUB (generalisation)
operation e_{1} ∨ e_{2} that computes the
most specific ψ-term subsuming both its arguments (which denotes a
superset of the union of their set denotations). The
expressions e_{1} and
e_{2} are either explicit ψ-terms or (presumably
defined) names.
 A feature projection operation e/f, where e is a
ψ-term or a (presumably defined) name, and f is a feature
symbol. This denotes the ψ-term under that feature f of
that ψ-term, or @ if it does not possess this
feature.
 If the same feature symbol appears more than once as the argument of a
ψ-term, e.g., s(f ⇒ t, ..., f ⇒ u), it is just
shorthand for s(f ⇒ t ∧ u, ...).
 As for the lexical categories:
 The topmost sort is represented by the symbol '@'; it
means "anything" as it denotes the set of all possible
things. The bottom sort is represented by the symbol '{}';
it means "nothing" as it denotes the empty set. They are both
built-in sort symbols. For obvious reasons, neither @
nor {} may appear in a
< declaration.
 Other than @ and {}, a sort symbol may be any
identifier, or a number.
 Number sorts may not appear in a < declaration. They
are implicit subsorts of the built-in sorts Integer and
FloatingPointNumber, which are both subsorts of
Number, whose only supersort is @. These three
sorts may not appear in a subsort declaration. [N.B.: this
may change in future versions when sort theories are supported, as
some < declarations involving these may make sense; for
example, EvenInteger < Integer. In this case, the theory
for EvenInteger would specify that its instances must be
divisible by 2. However, we will disallow it in early versions such
as OSF V0 since it does not support sort theories. We will discuss
this in more detail later.]
 A feature symbol may be any identifier or a non-zero natural
number (i.e., an unsigned positive
integer). The n^{th} missing feature in the argument
list of a ψ-term is simply the integer
n. For example, foo(bar, boo ⇒ buz, fuz,
hum) is just shorthand for foo(1 ⇒ bar, boo ⇒
buz, 2 ⇒ fuz, 3 ⇒ hum).
 A variable symbol (or tag) starts with the symbol '#'
followed by a non-empty sequence of letter, numeral, or _
(i.e., underscore) characters.
 There are several simplified notations of the syntax of sorts and
ψ-terms. Here are a few common such shorthands.
 If a ψ-term has an empty body (i.e., it has no
subterms), the parentheses for its body may be omitted
(i.e., #X : s() may be simply written #X :
s).
 A ψ-term may be just a tag #X followed by nothing, in
which case this is shorthand for #X : @.
 A disjunctive sort expression containing only one sort is just that
sort; i.e., {s} is simply s.
 An empty disjunctive sort expression is just the empty sort {}.
In this section, we cover in some detail practical aspects for the efficient
representation and use of multiple inheritance in a large sort taxonomy,
and the machinery for encoding partially ordered sorts as binary codes to
optimize lattice operations on them. We first give the details for
implementing a basic technique based on transitive closure. Then, we develop
data structures and methods for implementing a more elaborate technique
capable of handling very large taxonomies.
Note that no operation (either on sorts or ψ-terms) may be
performed before the complete set of sorts has been encoded. Sorts
are recorded from their appearances in declarations. However, other
undeclared sorts appearing in ψ-term definitions must also be so
recorded. Therefore, any attempt to perform an operation needing
information on sorts will prompt the user on whether compilation
should proceed. (See also the %encode and
%automatic pragmas below.)
Once the sort taxonomy has been encoded, any new sort declaration extending
the taxonomy, or any definition involving a new sort symbol, will prompt the
user for the need to re-encode the sort taxonomy before evaluating any
expression—unless the taxonomy is explicitly
"de-encoded"—i.e., all sort codes are erased. (See also
the %clear
and %extend pragmas below.)
In such a language, the sort taxonomy consisting of all the sorts is
encoded into bit vectors. This does not apply to literal values (such as
numbers, strings, etc.) but only to symbolic sorts. All the symbolic
sorts are numbered in the order in which they appear in a program and stored
in an array called (say) TAXONOMY of objects of
class Sort at the index corresponding to their order of
appearance. [N.B.: In fact, to allow for dynamic growth if needed,
it will be an ArrayList rather than, strictly speaking, an
array. But this is just a detail and we use "array" in this specification
for simplicity.] In this TAXONOMY array, the built-in symbolic
sorts are initialized as follows:
 Integer is stored in TAXONOMY[0];
 FloatingPointNumber is stored in TAXONOMY[1];
 Number is stored in TAXONOMY[2].
The bottom and top sorts ({} and @) are not stored
in TAXONOMY. They are instead defined as constant sorts.
[N.B.: In this specification, we limit the built-ins to
numbers, but actual implementations may (and hopefully will) include
other built-in sorts
(e.g., String, Boolean, List,
etc.). As well, interpreted operations on literals such as arithmetic,
string manipulation, etc., and related issues will be
discussed later.]
All other non-built-in sorts can then be stored from index 3 and upwards
in the order they come. In order to allow for more built-in sorts in
future versions (say, List, Nil,
String, Boolean, True, False,
etc.), rather than referring to index 3 explicitly, we
shall define an int constant called MIN_INDEX
denoting the least index available for non-built-in sorts in the array
TAXONOMY. Thus, in this specification, MIN_INDEX =
3.
The class Sort has the following fields: a name (the sort's print name as a String), an index (its order of appearance), and a code (its bit-vector encoding).
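Based on how these fields are used in the rest of this section (name, index, and code), a minimal Java sketch of the class might look as follows; the constructor and the equality conventions anticipate the discussion of NAMEDSORTS below, but the exact signatures are assumptions:

```java
import java.util.BitSet;

// Minimal sketch of the Sort class (field names follow the specification;
// constructor and method signatures are assumptions).
class Sort {
    final String name; // interned print name; sole basis of equality
    int index;         // order of first appearance (may be reordered later)
    BitSet code;       // bit-vector encoding (refined by transitive closure)

    Sort(String name, int index) {
        this.name = name.intern(); // interned so names compare with ==
        this.index = index;
        this.code = new BitSet();
        this.code.set(index);      // its own bit; completed by the encoding
    }

    @Override public boolean equals(Object o) {
        // depends only on the name: index and code change during encoding
        return o instanceof Sort && ((Sort) o).name == this.name;
    }

    @Override public int hashCode() { return name.hashCode(); }
}
```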
We shall ensure that the built-in sorts are encoded once and for all as follows:
 the code of sort {} has all its bits set to false;
 the code of sort @ has all its bits set to true;
 the code of sort Integer has its
1^{st} bit set to true, and all others set to false;
 the code of sort FloatingPointNumber has its 2^{nd}
bit set to true, and all others set to false;
 the code of sort Number has its 1^{st},
2^{nd}, and 3^{rd} bits set to true, and all others set to false.
Before the complete sort encoding may be performed, a
Sort object is created for each sort in the order it appears
and stored in the TAXONOMY array at the next available index
in TAXONOMY, with its name and this index. As it is created,
a Sort object of index n greater than or equal to
MIN_INDEX is initialized to have the n^{th}
bit of its code set to true.
When all Sort objects are thus stored in the array
TAXONOMY, a transitive closure procedure on all sorts of index
greater than or equal to MIN_INDEX must be performed to reflect the
declared ordering for non-built-in sorts. In order to do so,
whenever TAXONOMY[i].name has been declared to be a subsort
of TAXONOMY[j].name, the j^{th} bit
of TAXONOMY[i].code must be set to true.
After that, a standard transitive closure is performed on the codes of
all elements of index greater than or equal to MIN_INDEX to obtain the
final set of codes for all the sorts. Using Warshall's
algorithm (see also here), this gives:
int n = TAXONOMY.size();
for (int k = MIN_INDEX; k < n; k++)
  for (int i = MIN_INDEX; i < n; i++)
    for (int j = MIN_INDEX; j < n; j++)
      if (!TAXONOMY[i].code.get(j))
        TAXONOMY[i].code.set(j, TAXONOMY[i].code.get(k) && TAXONOMY[k].code.get(j));
N.B.: The java.util.BitSet class
ensures that the size of bit sets is increased as needed. However,
extending the size of a BitSet is done so that the new bits added
to it are higher-end bits initialized to
false. This may be fine for all the sort codes except for that
of @, since it must always have all its bits set to
true. It is also problematic when using Boolean negation
on a code, as the upper bits will then be flipped
to true—thus giving way to nonsensical interpretations.
Therefore, it is necessary to limit the size of relevant bits in
a BitCode so as to manipulate only the lower bits that make
sense. In particular, we must ensure that @ has all and only those
bits within this size set to true. This size is the number of
defined sorts. Hence, we must set the code of @ only after all
other codes have been computed, so that we know the maximum BitSet
size (i.e., TAXONOMY.size()) used, allowing us to
set @'s code correctly to BitCode.allSet(TAXONOMY.size()).
We must be careful to make Boolean operations (especially the not
operation) on codes operate only on the bits of index lower
than TAXONOMY.size(). This is another reason why we need to
define
the BitCode
subclass
of java.util.BitSet:
to provide dyadic and, or, and not
Boolean methods that are aware of the size of the codes and
always keep all higher bits set to false.
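Such a size-aware BitCode subclass can be sketched as follows; only the not operation actually needs the size bound here, and allSet corresponds to the constant code used for @ (the exact API is an assumption):

```java
import java.util.BitSet;

// Sketch of a size-aware BitCode (names follow the specification;
// the exact API is an assumption).
class BitCode extends BitSet {
    final int size; // number of relevant (low-order) bits

    BitCode(int size) { this.size = size; }

    // The code of @: all `size` relevant bits set to true.
    static BitCode allSet(int size) {
        BitCode c = new BitCode(size);
        c.set(0, size);
        return c;
    }

    // Complement restricted to the relevant bits: higher bits stay false.
    void not() { flip(0, size); }
}
```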
For large or sparse taxonomies, this encoding method can be further
optimized (see the following section on modulated
encoding).
This section discusses implementation issues for sort encodings in large
taxonomies using a technique
of code
modulation taking advantage of sparse or loosely connected sort
hierarchies, which happen frequently in practice.
Modulated encoding is a technique taking advantage of the topology
of the taxonomy. The basic idea of modulated encoding is that a taxonomy
can be partitioned into modules of tightly related elements such that
only a sparse number of "isa" links cross over from one module to
another. Then, the set of modules can itself be viewed as a
"meta"-taxonomy of module elements that can be encoded using the method
above. The taxonomy inside each module can then be encoded independently
of the other modules. This ends up providing the sorts with a
lexicographic encoding. It reduces the size of codes at the expense of
slightly less efficient Boolean lattice operations. Comparing two
elements first compares the codes of their modules; then, if that results
in a consistent module, it proceeds with the comparison in the resulting
module. The efficiency in code size and code operations depends on the
"quality" of the modulation of the original ordering. This method is
worth using for very large sparsely connected taxonomies (which are very
common). This modulation can be reiterated to have modules of
modules, etc.
[To be completed later...]
In essence, once the set of sorts has been encoded as explained
above, it is possible to proceed to interpreting sort expressions
using bit vector operations on the underlying sort codes. However,
there are some practical issues to consider.
As they are being declared, sorts are parsed as identifiers which
are then encapsulated as Sort objects stored in the
TAXONOMY array at their first appearance. Each sort name
must correspond to a unique Sort object. Therefore, a later
appearance of a sort name must be able to retrieve its corresponding
Sort object stored in TAXONOMY. Rather than
sweeping this array each time with a linear search, it is more
efficient to use a hash table called NAMEDSORTS associating
String sort names to the Sort objects they correspond
to. Thus, at the same time a Sort object is stored in the
array TAXONOMY, it is also recorded in the
hash table NAMEDSORTS as the value of its unique
(i.e., interned) String name. It is important to have
the equals method and the hash code of a Sort object
depend only on the sort's name, and not on its
index or code fields, as these are both
subject to change during the encoding and reordering of the array
TAXONOMY (see below). For this reason, Sort.name will
always be set to its intern() unique version so that names may be
compared with == rather than equals.
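The joint recording of a sort in TAXONOMY and NAMEDSORTS can be sketched as follows (built-in sorts are omitted, so user sorts start here at index 0 rather than MIN_INDEX; the Sort stub and the method name are assumptions):

```java
import java.util.ArrayList;
import java.util.HashMap;

// Stub of Sort, reduced to what this sketch needs.
class Sort {
    final String name; final int index;
    Sort(String name, int index) { this.name = name; this.index = index; }
}

// Sketch of recording each sort name once, in both TAXONOMY and NAMEDSORTS.
class SortRegistry {
    final ArrayList<Sort> TAXONOMY = new ArrayList<>();
    final HashMap<String, Sort> NAMEDSORTS = new HashMap<>();

    // Return the unique Sort for this name, creating it at first appearance.
    Sort getOrCreate(String name) {
        String key = name.intern(); // unique interned name
        Sort s = NAMEDSORTS.get(key);
        if (s == null) {
            s = new Sort(key, TAXONOMY.size()); // index = next available slot
            TAXONOMY.add(s);
            NAMEDSORTS.put(key, s);
        }
        return s;
    }
}
```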
It is possible (if only as a mistake) that some isa
declarations introduce cycles in the ordering. Detecting such cycles is
simple once the sort encoding has been performed by transitive
closure. This is explained next.
 Observe that the existence of a cycle in the ordering will imply
that all the sorts in the cycle have equal codes after the transitive
closure encoding is performed. This means that all these sorts must
denote the same set, and should be one and only one sort. Such a cycle
could either be flagged as an error, or all the sort symbols in the
cycle could be taken as synonymous (in effect taking the quotient
set with respect to the equivalence relation on sorts such cycles
create). In this version, the first option is taken, and such a cycle
is flagged as an error. This is because one presumably would not
intentionally declare a cyclical sort taxonomy.
 In order to detect potential cycles, we proceed
as follows. Once encoded by transitive closure on the sort ordering,
the part of the array TAXONOMY containing non builtin sorts
(i.e., starting at index MIN_INDEX) is sorted using a
topological ordering whereby a Sort object s is said
to precede another Sort object t iff, in this
order:
 s.code is strictly lower than t.code;
or
 s.code == t.code and
s.index < t.index;
or
 s.code is not lower than t.code, nor is
t.code lower than s.code
(i.e., s and t are unrelated)
and
 s.code.cardinality() == t.code.cardinality() and
s.index < t.index
or
 s.code.cardinality() < t.code.cardinality().
A code c_{1} is deemed lower than a code
c_{2} whenever (c_{1} AND c_{2}) ==
c_{1}. The expression c.cardinality() for a code
c denotes this code's cardinality (i.e., its
number of bits set to true).
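The "precedes" relation above can be sketched as a Java Comparator on Sort objects, with "lower" implemented as the bitwise test just defined (the Sort stub and the class names are assumptions):

```java
import java.util.BitSet;
import java.util.Comparator;

// Stub of Sort, reduced to what this sketch needs.
class Sort {
    final int index; final BitSet code;
    Sort(int index, BitSet code) { this.index = index; this.code = code; }
}

// Sketch of the topological "precedes" relation used to sort TAXONOMY.
class SortOrder implements Comparator<Sort> {
    // c1 is lower than c2 iff (c1 AND c2) == c1.
    static boolean lower(BitSet c1, BitSet c2) {
        BitSet t = (BitSet) c1.clone();
        t.and(c2);
        return t.equals(c1);
    }

    @Override public int compare(Sort s, Sort t) {
        if (s.code.equals(t.code))                 // equal codes: by index
            return Integer.compare(s.index, t.index);
        if (lower(s.code, t.code)) return -1;      // s strictly lower
        if (lower(t.code, s.code)) return 1;       // t strictly lower
        int c = Integer.compare(s.code.cardinality(), t.code.cardinality());
        return c != 0 ? c : Integer.compare(s.index, t.index); // unrelated
    }
}
```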
Sorting the array TAXONOMY (with,
say, QuickSort)
using the above "precedes" relation will always result in an array where
distinct sorts of equal codes (if any) end up being
contiguous. Therefore, since sorts of equal code form a cycle, cycles
can be detected and reported in one single sweep. This reordering will
also rearrange the sorts so as to reflect the sort taxonomy where lesser
sorts will always be at a lower index than their parents.

Note that, after the above topological reordering of the
array TAXONOMY, a sort object's index field will no
longer correspond to its position in the TAXONOMY array. This is
annoying if we wish to have direct access to a sort by its position in this
array. Resetting each sort's index field to match its new position rank in
the TAXONOMY array would not be satisfying either, since it would no
longer allow identifying which subsort corresponds to
which true bit in a code (see for example, when computing
the descendants of a sort, or its
height in the taxonomy). For this reason, we shall
maintain a new int[] array called RANK such
that RANK[i] contains the position in TAXONOMY of the sort
of index i. In other words, RANK[s.index] is the position
of sort s
in TAXONOMY; i.e., TAXONOMY[RANK[s.index]] = s.
This RANK array is initialized after sort encoding and reordering
while sweeping the TAXONOMY array checking for cycles.
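Building RANK while sweeping for cycles can be sketched as follows; since sorts with equal codes are contiguous after the topological reordering, comparing each code to its predecessor's suffices (the Sort stub and method name are assumptions):

```java
import java.util.ArrayList;
import java.util.BitSet;

// Stub of Sort, reduced to what this sketch needs.
class Sort {
    final String name; final int index; final BitSet code;
    Sort(String name, int index, BitSet code) {
        this.name = name; this.index = index; this.code = code;
    }
}

class Ranker {
    // Build RANK and detect cycles in one sweep over the topologically
    // reordered TAXONOMY: sorts with equal codes are now contiguous.
    static int[] buildRank(ArrayList<Sort> taxonomy) {
        int[] rank = new int[taxonomy.size()];
        for (int pos = 0; pos < taxonomy.size(); pos++) {
            Sort s = taxonomy.get(pos);
            rank[s.index] = pos; // taxonomy.get(rank[s.index]) == s
            if (pos > 0 && s.code.equals(taxonomy.get(pos - 1).code))
                throw new IllegalStateException("cyclic sort: " + s.name);
        }
        return rank;
    }
}
```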
Sort codes allow fast Boolean lattice operations on the sort taxonomy
so that intersection, union, and complement are optimized. Note
however that such operations may result in a code for which there is no
explicitly declared sort. This is fine as long as it is not necessary to
access explicitly declared sorts that appear in the sort ordering.
However, when this is eventually necessary—if only for printing
results—we will need to retrieve which explicit sorts a code
corresponds to. When this code corresponds to an actual sort in
TAXONOMY, then it is returned. Otherwise (i.e., when
there is no explicit sort with such a code), the answer for this code is
the disjunctive set of sorts possessing a code immediately lower than the
given code. In other words, the sort of a non-existing sort's code is
retrieved as the sort expression that is the union of its maximal lower
bounds.
In order to allow for such retrieval, we shall use a hash table
called CODECACHE, as a cache associating a sort code to the set
of all Sort objects that are its maximal lower bounds among the
explicitly declared sorts existing in the encoded taxonomy. If not
stored already, such a set is computed from the TAXONOMY
array.
To compute such a set of maximal lower bounds of a given code, we sweep
the TAXONOMY array upwards starting from index
MIN_INDEX. As we proceed upwards, we collect all lower bounds of
the given code in a set, pruning this set any time a code higher than one of
its elements is found, which is then added to the computed set. Note that
there is no need to go higher than the first sort whose code is the given
code or that of a supersort of the given code (because the order of the
sorted array ensures that all sorts of higher index are greater or
unrelated). Thus, the final set of Sort objects we end up with is
that of the code's maximal lower bounds, which we can then store in the
hash table cache CODECACHE as an image of the given code for
future potential retrieval of this code's maximal lower bounds without
having to recompute it.
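This sweep can be sketched as follows, assuming the TAXONOMY list is already topologically sorted as described above (the Sort stub and the method names are assumptions):

```java
import java.util.ArrayList;
import java.util.BitSet;

// Stub of Sort, reduced to what this sketch needs.
class Sort {
    final String name; final BitSet code;
    Sort(String name, BitSet code) { this.name = name; this.code = code; }
}

class Decoder {
    // c1 is lower than c2 iff (c1 AND c2) == c1.
    static boolean lower(BitSet c1, BitSet c2) {
        BitSet t = (BitSet) c1.clone();
        t.and(c2);
        return t.equals(c1);
    }

    // Sketch: maximal lower bounds of `code` among the declared sorts,
    // sweeping the topologically sorted TAXONOMY upwards from minIndex.
    static ArrayList<Sort> maximalLowerBounds(ArrayList<Sort> taxonomy,
                                              int minIndex, BitSet code) {
        ArrayList<Sort> mlbs = new ArrayList<>();
        for (int pos = minIndex; pos < taxonomy.size(); pos++) {
            Sort s = taxonomy.get(pos);
            if (!lower(s.code, code)) continue;        // not a lower bound
            mlbs.removeIf(t -> lower(t.code, s.code)); // prune subsumed bounds
            mlbs.add(s);
            if (s.code.equals(code)) break;            // exact sort found: done
        }
        return mlbs;
    }
}
```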
One important note is worth making regarding how to represent sets of sorts
stored in CODECACHE as potentially non-unique sorts associated to
a bit code. Such sets of sorts are also needed for printing out results
when decoding bit codes. It is possible of course to use aggregative
objects of Java type Sort[], ArrayList<Sort>, or
HashSet<Sort>. However, even though any of those would work
for our purpose, they would be unnecessarily wasteful in terms of memory
space and processing time. For example, they would lead to awkward set
operations, which are needed in many of
the taxonomy pragmas used to explore a
sort taxonomy (e.g., computing a
sort's children/parents, or
its siblings, mates,
similars, etc.).
So, rather than using such space-bulky and time-inefficient aggregate Java
types, it is simpler and more efficient to use a BitCode. Indeed,
all we need is to represent a set of sorts in canonical form; i.e.,
a maximal disjunctive set of incomparable sorts of the
form {s_{1}; ...; s_{n}}. Hence, a bit code will
precisely represent such a disjunctive sort when
its j^{th} bit is set to true iff
j == s_{i}.index, for all i
= 1,...,n. Retrieving any sort member s_{i} of
such a set is then given simply by TAXONOMY[RANK[j]],
where j is the bit set to true
for s_{i}. If such a bit code c is such
that c.cardinality() == 1, then it corresponds to the unique defined
sort given by TAXONOMY[RANK[c.nextSetBit(0)]].
Sort expressions as defined above are essentially
Boolean expressions. Evaluating such expressions can be quite efficient once
the sort taxonomy has been encoded as we can rely on the underlying Boolean
lattice of BitCodes. However, there are a few subtle issues to
consider in order to keep these operations as efficient as they should
be. Following is an overview of a setup for evaluation of sort expressions
that will have been parsed as expressions involving sort names.

Evaluation of a sort expression is done in the context of an evaluation
environment. Such an environment may be defined as a class called,
say, Context. An instance of such a Context will serve
as an environment of evaluation for expressions. So we will assume such
an instance object called, say, CONTEXT that consists of at
least the three structures defined above:
 the array of sort codes TAXONOMY;
 the table of sort names NAMEDSORTS;
 the cached codes' table CODECACHE.
Such a context will also contain other components necessary for
accessing global information pertaining to a sort taxonomy and methods
that may need this information. It may be construed as a
compilation/evaluation environment for building, accessing, and using
the shared taxonomic information (such as an error manager, display manager,
evaluation stack, etc.).
Thus, in particular, such a CONTEXT structure must be
accessible to the parser for declaring and encoding sorts as described
above.
 Concerning the evaluation of sort expressions, the idea is to rely
on the sorts' bit vector codes, once the sort taxonomy has been
encoded. Therefore, a possible evaluation scheme could proceed as a
simple Boolean calculator interpreting sort expressions
in CONTEXT's environment as they are being parsed once the
sorts have been encoded, checked for cycles, and reordered to ease
code retrieval, and the tables NAMEDSORTS
and CODECACHE have been initialized with the declared
sorts.
Indeed, such an interpreter would only require Boolean operations on
codes. Thus, a sort expression could be evaluated from the result of
evaluating its subexpressions. The leaves of such expressions being
sort names, this scheme would have to use the corresponding
BitCodes retrieved from NAMEDSORTS and produce a
BitCode result for each nested sort expression. Eventually,
the outermost expression would result in a BitCode, which
could be retrieved/cached from the CODECACHE table.
However, such an interpretation scheme necessitates that
non-destructive Boolean operations be used on the bit
codes; if not, it would destroy the sort codes stored
in TAXONOMY. So, it would be a wasteful scheme, as it would
require copying each argument of a Boolean operation on BitCodes
before performing the operation. Hence, the whole intent of optimal
use of bit vectors for fast in-place Boolean operations would become
moot—not to mention the space waste that such copying would generate.
 Therefore, we need to minimize unnecessary copying while evaluating
a Boolean sort expression on bit codes while ensuring no destructive
side effects on the sort codes in the encoded taxonomy. A
simple compilation scheme can thus be devised in
a CONTEXT's environment. The idea is that, instead of the
naive interpretation scheme described above that would evaluate each
subexpression as it is read, performing this evaluation can be delayed
until the outermost expression has been parsed and
constructed.
Such a scheme would then compile this outermost expression and its
subexpressions in postfix form so as to enable a stack-based
evaluation. The point is to use a reusable copy of a sort code
whenever possible. This may be done as described next.
 Implement a locking mechanism on a BitCode indicating
whether the bit code may be modified in place. In particular,
the codes of the encoded taxonomy should all be locked.
 Implement static Boolean methods on bit codes for and,
or, and not (which are all we need) in such a way
that whenever at least one of their arguments is unlocked, reuse
this argument in place and push it back on the evaluation stack
as the operation's result. Otherwise, copy only one of the
arguments before performing the operation in place and reuse
the same unlocked bit code for the result as well. Therefore,
the result of all operations pushed on the evaluation stack will
always be unlocked and thus reusable in place.
 After that, all other operations on non-leaf arguments in the
expression will be guaranteed to need no argument copying since
one of their arguments will always be reusable.
With such a scheme, evaluating a postfix expression pushed on the
evaluation stack ensures that a minimal amount of copying is
performed. The final result of evaluating the expression so
compiled into a postfix expression stack ends up being the last value
pushed on the evaluation stack when reaching the bottom of the postfix
expression stack, and this value is always unlocked. So it may then be
locked if needed and returned.
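The lock-and-reuse discipline for the dyadic operations can be sketched as follows, shown here for and only (a simplified stand-in for the BitCode operations described above; names are assumptions):

```java
import java.util.BitSet;

// Sketch of lock-aware codes: locked codes (e.g., those in TAXONOMY)
// are never mutated; unlocked ones are reused in place.
class Code extends BitSet {
    boolean locked;

    Code(boolean locked) { this.locked = locked; }

    Code copy() {
        Code c = new Code(false);
        c.or(this);
        return c;
    }

    // At most one copy per operation: reuse an unlocked argument if any.
    static Code and(Code a, Code b) {
        if (!a.locked) { a.and((BitSet) b); return a; }
        if (!b.locked) { b.and((BitSet) a); return b; }
        Code r = a.copy(); // both locked: copy only one argument
        r.and((BitSet) b);
        return r;          // results are always unlocked, hence reusable
    }
}
```

Because the result of every operation is unlocked, any later operation consuming it reuses it in place, which yields the copy counts given above.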
This scheme results in just one code copy per dyadic operation having
all its arguments be declared sort symbols. For example, only one
copy is required for evaluating the disjunctive
sort {s_{1}; ...; s_{n}} for any n
(that of s_{1}). The sort
expression (s_{1} & ... & s_{n})\(t_{1} |
... | t_{m}), where the s_{i}'s
and t_{j}'s are sort symbols, will require only two
copies for any n and m
(those of s_{1} and t_{1}).
Before being interpreted, a ψ-term is represented as a data
structure, which is described next. Such a structure is built by parsing
its syntax, ensuring that all such a structure's properties are kept
consistent with its definition prior to being interpreted.
A ψ-term structure is built out of the syntactic expression
described above. Such a syntactic structure is represented as
an instance of the class PsiTerm. The basic structure of this class
consists of:
 a field tag of type Tag;
 a field sort of type SortExpression;
 two tables:
 a field positions of type ArrayList containing
ψ-terms for its positional subterms;
 a field features of type HashMap associating features
of type Feature to ψ-terms for its featured subterms.
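A direct transcription of this structure might look as follows; the Tag and SortExpression stubs, and Feature's equality on its interned name, are assumptions:

```java
import java.util.ArrayList;
import java.util.HashMap;

// Stubs for the supporting classes (reduced to what this sketch needs).
class Tag { String name; }
class SortExpression { String text; }

class Feature {
    final String name;
    Feature(String name) { this.name = name.intern(); }
    @Override public boolean equals(Object o) {
        return o instanceof Feature && ((Feature) o).name == name;
    }
    @Override public int hashCode() { return name.hashCode(); }
}

// Sketch of the PsiTerm structure as described above.
class PsiTerm {
    Tag tag;
    SortExpression sort;
    final ArrayList<PsiTerm> positions = new ArrayList<>();     // positional subterms
    final HashMap<Feature, PsiTerm> features = new HashMap<>(); // featured subterms
}
```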
A tag is essentially a logical variable. In other words, it may be unbound, or
bound to another tag.
Its essential structure consists of:
 a field name of type String, which identifies the tag's name;
 a field ref of type Tag, which identifies a tag that it is
bound to. As in Logic
Programming, we will follow the convention that an unbound tag is
selfreferential.
 A tag also has a field term of type PsiTerm which,
when not null, refers to the ψ-term of which it is
the tag field. A tag with a null term field
is called a "free" tag, which means that it is not the tag of
any ψ-term (although it may be bound to one that is).
Thus, a tag uses a Tag deref() method that follows the chain
of ref links until reaching the ultimate unbound (i.e.,
self-referential) tag, which it returns. Therefore, any time a
tag is accessed, it must be dereferenced to get to its actual identity.
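A sketch of Tag with this self-referential convention follows; the bind method shown here is an assumption illustrating how tags could get bound during merging:

```java
// Stub for PsiTerm, reduced to what this sketch needs.
class PsiTerm { }

// Sketch of Tag with the self-referential unbound convention.
class Tag {
    final String name;
    Tag ref;          // an unbound tag is self-referential
    PsiTerm term;     // null for a "free" tag

    Tag(String name) { this.name = name; this.ref = this; }

    // Follow the chain of ref links to the ultimate unbound tag.
    Tag deref() {
        Tag t = this;
        while (t.ref != t) t = t.ref;
        return t;
    }

    // Bind this tag's representative to another's (assumed helper).
    void bind(Tag other) { deref().ref = other.deref(); }
}
```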
A feature is just a name—and could thus be represented as
a String. However, for reasons that will become apparent when we
introduce typing constraints for features, we represent them as instances of the
class Feature, which (for now) has only one field name, of
type String.
Building a ψ-term structure by a parser from its written syntax must
essentially ensure that (1) features and positions are functional
(i.e., that a given feature or position appears at most once as
argument of the ψ-term); and, (2) tagging is consistent
(i.e., that a given tag refers to the same ψ-term
(sub)structure no matter where it occurs). Since this must be done
prior to sort interpretation, we proceed as follows.
If a feature (or position) that has already been built as a ψ-term's
argument (say, as f ⇒ t) occurs again as an argument to
the same ψ-term (say, as f ⇒ t'), then the structure
of t' is merged into that of t: their sorts are conjoined,
their arguments are recursively merged feature by feature (arguments
of t' absent from t are simply added), and their tags are bound together.
The same subterm merging procedure is applied for multiply occurring
tags. In that case, a subterm is first built using a new tag, then this
newly built subterm is merged with the reoccurring tag's term. This
ensures consistent term sharing, including circular coreferences.
For example, parsing the following ψ-term syntax:
#P : person
( id => @
( first => "John"
)
, id => name
( last => #S
, first => string
)
, spouse => married_person
( address => #A : location
)
, spouse => @
( id => name
( first => "Jane"
, last => #S : "Doe"
)
, id => name
( first => string
)
, spouse => #P : married_person
( address => #A
)
)
)
will yield the following uninterpreted ψ-term structure:
#P : person & married_person
( id => name
( first => "John" & string
, last => #S : "Doe"
)
, address => #A : location
, spouse => married_person
( id => name & name
( first => "Jane" & string
, last => #S
)
, spouse => #P
, address => #A
)
)
Note that, because of term merging and the fact that a ψterm is
uniquely identified by its tag, the "true" identity of a ψterm must
be retrieved through its dereferenced tag. This is especially
relevant when accessing subterms of a ψterm. Thus, just as for
tags, a PsiTerm deref() method is provided for this
purpose. For a ψterm t, invoking t.deref() is
equivalent to invoking t.tag().deref().term(). Therefore, whenever the
actual structure of a ψterm is accessed, the ψterm must first be
dereferenced to get to its actual identity.
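A minimal self-contained sketch of this equivalence, with field and method names assumed rather than fixed by this specification:

```java
// Hypothetical sketch of PsiTerm dereferencing; tag(), term() and the
// ref field are illustrative names following the spec's usage.
class Tag {
    Tag ref = this;          // selfreferential when unbound
    PsiTerm term = null;
    Tag deref() {
        Tag t = this;
        while (t.ref != t) t = t.ref;
        return t;
    }
    PsiTerm term() { return term; }
}

class PsiTerm {
    private final Tag tag;
    PsiTerm(Tag tag) { this.tag = tag; tag.term = this; }
    Tag tag() { return tag; }

    // The "true" identity of a psi-term is the term of its
    // dereferenced tag: t.deref() == t.tag().deref().term().
    PsiTerm deref() { return tag().deref().term(); }
}
```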
Once built, an uninterpreted ψterm structure may be interpreted by
evaluating the sort expressions at each node. In order to do that, the
sort taxonomy must first be encoded. If the interpretation process
yields {} for any node, then the whole ψterm's value
is {}.
Once interpreted, the prettyprinter must be adapted to decode the
sort expression's values at each node resulting from their evaluation.
[To be completed later...]
This is a facility only meant as a pragmatic convenience for a user to avoid
having to type in a large ψterm several times if needed. A name
definition associated with a ψterm has global scope. However, each
occurrence of a defined name in an expression stands for a new ψterm:
the one it defines, up to renaming of its tag variables. In particular,
several occurrences of the same name will not share variables before
interpretation. (After interpretation, some of these tag variables may get
unified.) Indeed, the scope of defined names is global while tag variables
appearing in an expression are local to that expression. In order to avoid
infinite recursive expansion issues, such a defined name may not be allowed
to appear inside an expression definition before being itself
defined—including in its own definition expression.
[To be completed later...]
Next, we elaborate on operations meant as facilities for a user to explore
or set parameters of an OSF sort definition and evaluation
environment. These are dubbed pragmas as they are, strictly speaking,
not components of an OSF language, but pragmatic tools related to an OSF
environment. They take the form of extralinguistic operators whose
effect is to set parameters of, or return information about, the state of an
OSF environment. Such a pragma is a builtin construct of the
form %pragma where "pragma" is the name of the pragma,
possibly followed by arguments.
Taxonomy pragmas are operators meant to explore a declared sort
taxonomy. Useful pragmas for exploring taxonomic properties are listed
below. This is neither a complete list, nor are all of these guaranteed to
be provided due to nontrivial computational complexity required by some
(e.g.,
%width).
 %size—This prints the number of defined sorts in the
sort taxonomy.
 %sorts—This prints the array of all defined sorts and
their codes.
 %height s—This prints the size of the longest
isa sort chain from {} up to s. Thus, the
height of the complete taxonomy is given by %height @, or
simply %height (i.e., without argument).
 %depth s—This prints the size of the shortest
isa sort chain from @ down to s. Thus, the depth
of the complete taxonomy is given by %depth {}, or
simply %depth (i.e., without
argument). Clearly, %depth {} and %height @ must be
equal, and equal to %depth s + %height s for any
sort s.
 %width s—This prints the size of the widest
isa sort antichain containing s; that is, the
size of the largest set of mutually unrelated sorts having s as
one of its elements. (Note that this is not necessarily the size of the
set computed by %unrelateds s; but it
will always be greater than or equal to it.)
The width of the complete taxonomy is given by %width
(i.e., without argument).
 %children s—This prints the set of maximal strict
lower bounds of s (i.e., the set of immediate
subsorts of s), or {} if there are none.
 %parents s—This prints the minimal strict upper bounds
of s (i.e., the set of immediate supersorts
of s), or @ if there are none.
 %minimals—This is equivalent to %parents {}
or %heirs @.
 %maximals—This is equivalent to %children @
or %founders {}.
 %descendants s—This prints the set of strict lower bounds
of s, or {} if there are none.
 %ancestors s—This prints the set of strict upper
bounds of s, or @ if there are none.
 %heirs s—This prints the set of minimal non{}
strict lower bounds of s (i.e., the subset of
its descendants immediately above {}), or {} if there are none.
 %founders s—This prints the set of maximal non@
strict upper bounds of s (i.e., the subset of
its ancestors immediately below @), or @ if there are none.
 %isa s t—This returns true if s denotes
a subsort of t, and false otherwise.
 %related s t—This returns true if
either s is a subsort of t or t is a
subsort of s, and false otherwise.
 %unrelated s t—This returns true if
neither s is a subsort of t nor t is a
subsort of s, and false otherwise.
 %unrelateds s—This prints the set
of maximal sorts that are incomparable to s (i.e., the
maximal antichain containing s).
 %sibling s t—This returns true if s
and t have the same parents, and false
otherwise.
 %siblings s—This prints the set of maximal sorts that have
the same parents as s, including s itself.
 %mate s t—This returns true if s
and t have the same children, and false
otherwise.
 %mates s—This prints the set of maximal sorts that have the
same children as s, including s itself.
 %similar s t—This returns true if s
and t have both the same parents and children, and false
otherwise. Note that this does not mean that s and t
are equal sorts.
 %similars s—This prints the set of maximal sorts that have
both the same parents and children as s, including s
itself.
The children of a sort are its maximal lower bounds—except for bottom,
which is always childless: its children are just itself ({}).
In order to compute the children of a sort, given its code c,
we need to sweep the TAXONOMY array starting from {}
up to, but not past, the first sort whose code is equal to or greater
than c. As we do so, we collect any defined sort whose code is
a strict lower bound of c, removing from the collected set any
sort subsumed by a previously socollected sort. The reason why we need
not proceed past the first sort whose code is equal to or greater
than c is thanks to the postencoding reordering of
the TAXONOMY array, which guarantees that any sort of higher
rank will have a code that is either incomparable to or contains the
given code.
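Under the assumption that each sort carries a java.util.BitSet code whose true bits are the indices of its descendants (including itself), and that the TAXONOMY list is sorted by nondecreasing code, this sweep can be sketched as follows; all names are illustrative:

```java
// Sketch of the children sweep over the rank-ordered taxonomy: collect
// strict lower bounds of c, keeping only the maximal ones, and stop at
// the first code equal to or containing c.
import java.util.ArrayList;
import java.util.BitSet;
import java.util.List;

class ChildrenSketch {
    static class Sort {
        final String name; final BitSet code;
        Sort(String name, BitSet code) { this.name = name; this.code = code; }
    }

    // True iff b is a subset of a (i.e., a's code contains b's).
    static boolean contains(BitSet a, BitSet b) {
        BitSet x = (BitSet) b.clone();
        x.and(a);
        return x.equals(b);
    }

    static List<Sort> children(BitSet c, List<Sort> taxonomy) {
        List<Sort> kids = new ArrayList<>();
        for (Sort t : taxonomy) {                    // sweep upward from {}
            if (contains(t.code, c)) break;          // t.code >= c: stop
            if (contains(c, t.code)) {               // strict lower bound of c
                kids.removeIf(k -> contains(t.code, k.code)); // keep maximal
                kids.add(t);
            }
        }
        return kids;
    }
}
```

Note that {} (the empty code) is collected first and then displaced as soon as any real lower bound is found, which matches the convention that a childless sort's children are just {}.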
The children of a sort expression are defined as follows.
 If the expression is just a sort symbol, there is no need to
evaluate it and its children are computed as explained above.
 Otherwise, the expression is evaluated and if its value is the code of
a single sort then this is again treated as the previous
case.
 Otherwise, the set of maximal lower bounds of the expression's
value code is computed and displayed.
The parents of a sort are its minimal upper bounds—except for top,
which is always parentless: its parents are just itself (@).
Computing the parents of a sort is done dually to computing the children,
starting from @ and proceeding downward to, but not
past, the first sort whose code is equal to or less than c. As
we do so, we collect any defined sort whose code is a strict upper bound
of c, removing from the collected set any sort subsuming a
previously socollected sort.
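Assuming, as before, BitSet codes and a rank-ordered taxonomy list, the parents sweep can be sketched dually; all names are illustrative:

```java
// Dual sketch of the parents sweep: proceed downward from @, stop at
// the first code equal to or contained in c, collect strict upper
// bounds, and keep only the minimal ones.
import java.util.ArrayList;
import java.util.BitSet;
import java.util.List;

class ParentsSketch {
    static class Sort {
        final String name; final BitSet code;
        Sort(String name, BitSet code) { this.name = name; this.code = code; }
    }

    static boolean contains(BitSet a, BitSet b) {   // b subset of a
        BitSet x = (BitSet) b.clone();
        x.and(a);
        return x.equals(b);
    }

    static List<Sort> parents(BitSet c, List<Sort> taxonomy) {
        List<Sort> ps = new ArrayList<>();
        for (int i = taxonomy.size() - 1; i >= 0; i--) { // sweep down from @
            Sort t = taxonomy.get(i);
            if (contains(c, t.code)) break;              // t.code <= c: stop
            if (contains(t.code, c)) {                   // strict upper bound
                ps.removeIf(k -> contains(k.code, t.code)); // keep minimal
                ps.add(t);
            }
        }
        return ps;
    }
}
```

Applied to the empty code, this sweep returns the minimal sorts, matching %parents {} = %minimals.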
The parents of a sort expression are defined similarly as for the
children of a sort expression, only dually.
 If the expression is just a sort symbol, there is no need to
evaluate it and its parents are computed as explained above.
 Otherwise, the expression is evaluated and if its value is the code of
a single sort then this is again treated as the previous
case.
 Otherwise, the set of minimal upper bounds of the expression's
value code is computed and displayed.
[To be completed later...]
[To be completed later...]
[To be completed later...]
The heirs (resp., founders) of a sort s are easily
obtained from the minimals (resp., maximals), which are given
by %parents {} (resp. %children @).
For %heirs s, this is done by keeping only those sorts in the set
of minimals whose index corresponds to a bit set to true in s.code.
For %founders s, this is done by keeping only those sorts in the
set of maximals whose code has the bit corresponding to s.index
set to true.
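These two bit filters can be sketched as follows; the field names index and code follow the spec's usage, and the rest is illustrative:

```java
// Sketch of %heirs and %founders as bit filters over the minimal
// (resp. maximal) sorts; all names are illustrative.
import java.util.ArrayList;
import java.util.BitSet;
import java.util.List;

class HeirsSketch {
    static class Sort {
        final int index; final BitSet code;
        Sort(int index, BitSet code) { this.index = index; this.code = code; }
    }

    // heirs(s): minimal sorts whose index bit is true in s.code.
    static List<Sort> heirs(Sort s, List<Sort> minimals) {
        List<Sort> out = new ArrayList<>();
        for (Sort m : minimals)
            if (s.code.get(m.index)) out.add(m);
        return out;
    }

    // founders(s): maximal sorts whose code has the bit at s.index set.
    static List<Sort> founders(Sort s, List<Sort> maximals) {
        List<Sort> out = new ArrayList<>();
        for (Sort m : maximals)
            if (m.code.get(s.index)) out.add(m);
        return out;
    }
}
```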
Computing the set of descendants of a sort in a fully encoded taxonomy is quite
easy since they correspond to the true bits in its code, except for
that sort itself. Therefore, it is given by the following formula:
 descendants(s) = s.code.copy().set(s.index,false).
Contrary to the set of descendants, computing ancestors of a sort s
in a fully encoded taxonomy is not so obvious. Indeed, the method
described above relies on the fact
that s.code was obtained by transitive closure. So the positions of
the true bits in s.code give the indices of all and only the
descendants of s.
For ancestors, it is not possible to proceed similarly by relying on
the false bits of s.code. Indeed, these bits only say
that the sorts positioned at these ranks are not subsorts
of s. In other words, they would only give a superset of the
set of the ancestors of s.
However, thanks to the postencoding reordering done on TAXONOMY,
we know that all the ancestors of s, if any, must be positioned at
a higher rank than that of s in TAXONOMY. Therefore, it
is sufficient to sweep all sorts at a rank position strictly
above s's rank position in TAXONOMY and collect any sort
that has its bit corresponding to s's index set
to true. This is done as follows:
 BitCode ancestors = new BitCode();
 int index = s.index;
 int rank = RANK[index];
 for (int i = TAXONOMY.size()-1; i > rank; i--)
   if (TAXONOMY[i].code.get(index))
     ancestors.add(TAXONOMY[i].index);
Then, upon termination, ancestors(s) = ancestors.
Computing the height of a sort in a fully encoded taxonomy can be done by dynamic
programming as follows.
 Given a sort s, height(s) is defined as the
length of the longest isa chain from s
to {}.
 Therefore, it is given by the following recursive formula:
 if s == {} then height(s) = 0;
 else height(s) = 1 + max
{ height(TAXONOMY[RANK[i]]) | s.code_{i} == true & i =/= s.index }.
Then, the height of the whole taxonomy is given by height(@).
In order to make this computation more efficient, we can prevent
multiple recomputation of the height of each sort in the above recursive
formula by setting a new field in the class Sort
called height to a sort's height as soon as it is computed.
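A memoized sketch of this computation, assuming a byIndex array mapping a sort's index to the sort itself and leaving {} implicit with height 0; the height field caches each result, and all names are illustrative:

```java
// Memoized sketch of the recursive height formula: a sort's height is
// 1 + the maximum height of its strict descendants ({} counting as 0).
import java.util.BitSet;

class HeightSketch {
    static class Sort {
        final int index; final BitSet code;
        int height = -1;                  // -1 means not yet computed
        Sort(int index, BitSet code) { this.index = index; this.code = code; }
    }

    static int height(Sort s, Sort[] byIndex) {
        if (s.height >= 0) return s.height;          // memoized result
        int max = 0;                                 // height of {}
        for (int i = s.code.nextSetBit(0); i >= 0; i = s.code.nextSetBit(i + 1))
            if (i != s.index)
                max = Math.max(max, height(byIndex[i], byIndex));
        return s.height = 1 + max;
    }
}
```

With memoization, each sort's height is computed once, so the whole taxonomy's height (that of @) costs time linear in the number of true bits over all codes.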
Computing the width of a poset is known to be
a difficult
problem.
[To be completed later...]
We also provide for other miscellaneous useful pragmas.
There could certainly be other useful pragmas that may be added to this basic set.
Dealing with literal constants of builtin sorts (such as numbers,
strings, etc.) requires special care.
In order to allow elements of large or infinite sets of values such as
integers, floats, strings, etc., we need to provide a means to
account for such values without treating each element as
a bona fide sort. Indeed, encoding the elements of infinite
sets (e.g., the integers) would be unfeasible, and encoding those of
large finite sets would be grossly inefficient, as it would require a
large number of codes of large size, each bearing one single true bit.
Another issue regarding elements is that they may be aggregated
using a monoid composition law—i.e., a dyadic
associative operation with a neutral element. For example, for number
elements: (+,0), (×,1),
(min,+∞), (max,−∞), etc.
To deal with both issues, a simple data structure representing an
element sort can be used as a subclass of Sort bearing, in
addition to the sort's bit code, a value and a monoidal composition
law. The former indicates the sort of the element, and the latter how to
compose two elements of compatible sorts if ever unified. By default,
this composition law is equality—i.e., whenever the elements
have a value, it must be the same. If another law is specified and both
elements have compatible sorts and each has a value, then the two
elements are composed using it and the sorts intersected as normal sort
conjunction.
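Such an element sort could be sketched as follows, with the bit code and sort intersection elided and all names illustrative; a null law stands for the default law, equality:

```java
// Sketch of an element sort carrying a value and a monoidal composition
// law; class and field names are illustrative, and the bit-code part of
// the sort is elided.
import java.util.function.BinaryOperator;

class ElementSketch {
    static class ElementSort {
        final Object value;                    // null if no value yet
        final BinaryOperator<Object> law;      // null means equality

        ElementSort(Object value, BinaryOperator<Object> law) {
            this.value = value;
            this.law = law;
        }

        // Compose two elements upon unification; the sorts themselves
        // would be intersected as a normal sort conjunction (elided).
        ElementSort unify(ElementSort other) {
            if (value == null) return other;
            if (other.value == null) return this;
            if (law == null)                   // default: values must agree
                return value.equals(other.value) ? this : null;  // null = {}
            return new ElementSort(law.apply(value, other.value), law);
        }
    }
}
```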
[To be completed later...]
[To be completed later...]
A polymorphic sort is a sort that takes another sort as
argument—e.g., set(α),
list(α), etc., ...—where α is
a sort variable. Such a sort may have its sort variable be
instantiated by any existing
sort—e.g., set(person), list(set(integer)), etc.,
...
[To be completed later...]
A constraint attached to a sort in the taxonomy, specifying how
features are sorted and which equations hold among feature paths, is
called an OSF theory.
It first puts ψterms into a socalled dissolved form. Dissolving a
ψterm transforms its graph structure into a set of labelled triples
representing its edges. This is quite similar to the way RDF triples
are obtained from their XML representation syntax that refer to other XML
structures using tag attributes. In fact, this can be a standard internal
format for ψterms as well as closely related graphdata formats such as
semistructured
data representations,
LinkedData, JSON,
JSONLD,
etc., ...
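A hedged sketch of such a dissolution into (tag, feature, tag) triples, using a visited set so that circular coreferences terminate; all names are illustrative:

```java
// Sketch of dissolving a psi-term graph into labelled edge triples
// (tag, feature, tag), akin to RDF triples.
import java.util.ArrayList;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

class DissolveSketch {
    static class PsiTerm {
        final String tag, sort;
        final Map<String, PsiTerm> args = new LinkedHashMap<>();
        PsiTerm(String tag, String sort) { this.tag = tag; this.sort = sort; }
    }

    // Each triple (X, f, Y) states that feature f leads from the term
    // tagged X to the term tagged Y.
    static List<String[]> dissolve(PsiTerm t) {
        List<String[]> triples = new ArrayList<>();
        walk(t, triples, new HashSet<>());
        return triples;
    }

    private static void walk(PsiTerm t, List<String[]> triples, Set<String> seen) {
        if (!seen.add(t.tag)) return;          // already dissolved: stop
        for (Map.Entry<String, PsiTerm> e : t.args.entrySet()) {
            triples.add(new String[] { t.tag, e.getKey(), e.getValue().tag });
            walk(e.getValue(), triples, seen);
        }
    }
}
```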
[To be completed later...]
This addresses a method of compiling dissolved ψterms (especially
for unification) using
a WAMstyle
abstractmachine.
A basic method is described in
this technical
report. An implementation of the method done
in LIFE is
described here. This
method can be extended
to compile OSF
theories.
However, these methods have to be spelled out for an efficient Java
implementation.
[To be completed later...]
[To be completed later...]
[To be completed later...]