How to Not Verb

Author: Logan Kearsley

MS Date: 09-18-2019

FL Date: 04-01-2020

FL Number: FL-000067-00

Citation: Kearsley, Logan. 2019. «How to Not Verb.» FL-000067-00, Fiat Lingua, . Web. 01 April 2020.

Copyright: © 2019 Logan Kearsley. This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License.

Fiat Lingua is produced and maintained by the Language Creation Society (LCS). For more information about the LCS, visit

How to Not Verb

A Guide to Freeing Yourself from Other Languages’ Categories & Inventing Your Own

What Does It Mean to Have a Category?
Defining a Cross-Linguistic Verb
A Review of Verbless Languages
    Kēlen
    AllNoun
    Cho’ron & Gogido
    Dyrel & Duojjin
    Riau Indonesian
Creating Novel Categories
    Oneida


Eliminating certain parts of speech or certain grammatical categories is fairly easy, and a good way for conlangers to create constraints for themselves that can inspire greater artistic creativity. Many natlangs, for example, get along just fine without a distinct class of adjectives, and eliminating them—and thinking about just how you will express the same meanings in other ways—is a good way to ensure that, e.g., you won’t be making an accidental relex of English, and to force yourself to explore some new avenues of expression (presuming, of course, that you are not a native speaker of a natlang that lacks adjectives already).


The slightly more advanced conlanger may even notice that some languages have additional parts of speech that Standard Average European lacks—the classifiers or counter words found in some Asian languages, for example. What is quite rare, however, is for a conlanger to discard more of the familiar categories, or to create something entirely new.


The first hurdle to get over is, of course, recognizing that such things could even be possible. It is a widely-held belief, for example, that any fully functional language must obviously have at least nouns and verbs! Many reference works take that as an uncompromisable article of faith. And yet, many of the Salish languages of the American Northwest have been seriously analyzed as having no nouns at all. There is some disagreement about this analysis among experts in the field, but suppose we believe it—surely it is not possible to have a language that lacks verbs!

1 Unless otherwise specified, throughout this article, the term «category» refers only to «grammatical category» (not e.g. categories such as «game»).




And yet, several conlangers have indeed made the attempt. And, shocking as it may seem, there are even natlang precedents: you can, for example, see verbs in Luiseño (another Native American language) if you already know what verbs are, and squint a bit—but the internal logic of the language itself does not require them! The functions of verbs and nouns are divided up in different ways in that language, producing an entirely different set of new and unique parts of speech. Additionally, linguist David Gil has argued that there is a large subset of Riau Indonesian, making up a significant proportion of all colloquial conversation in that language, which requires no distinction between parts of speech at all. What remains are special classes of function words, again quite different from what most Anglophone or European conlangers would be familiar with.

How do these languages work? And how can you learn to think outside the mold, and invent your own new

ways of dividing up the work in your own language’s morphology and syntax?

What Does It Mean to Have a Category?

If we are to try to create a language without verbs (or any other grammatical category), we must first know
what it means to “have verbs”, or to lack them. This may seem obvious to many readers, but I expect that quite
a few of you actually disagree; one Reddit commenter went so far as to say

“Does any of those make relation distinctions between nouns? If so I’d say they are not verbless” 5

By that criterion, prepositions are verbs—but I think most readers would disagree with that conclusion!

There is a certain point of view which would say that it is trivially easy to create a language without verbs (or any other part of speech). Any language that isn’t a direct relex of English doesn’t have verbs, by definition; “verb”, after all, is an English word, referring to a particular English lexical category. Russian, by comparison, does not have “verbs”; it has “глаголы”, which are a whole lot like verbs in many ways, but quite different from them in others. When we tell English-speaking students of Russian that they are going to learn about Russian verbs, we are merely constructing a convenient fiction, a lie-for-children, which will assist in teaching by taking advantage of a metaphorical connection between Russian “глаголы” and a similar thing that the students are already familiar with.

This point of view is called non-apriorism, or non-aprioristic structuralism. In other words, it assumes that pre-existing theoretical (a priori) categories do not actually exist, and that the structure of any language is entirely self-contained. Categories like parts of speech can be defined only by contrast with other categories in the same system; since each language is its own system, each language has its own categories, and cross-linguistic typology is, in fact, impossible.


3 See e.g. Eloise Jelinek and Richard A. Demers, Predicates and Pronominal Arguments in Straits Salish, Language, Vol. 70, No. 4 (Dec., 1994), pp. 697-736, available at
4 See SIL Glossary of Linguistic Terms, available at
5 Comment by u/gliese1337, from discussion
Martin Haspelmath, Framework-free grammatical theory, in The Oxford Handbook of Grammatical Analysis, Bernd Heine & Heiko Narrog (eds.), July 2008, available at

This was the view taken by such influential linguists as Franz Boas and Ferdinand de Saussure. And it is, in some contexts, quite useful. Believing in it prevents such travesties as made-up rules against split infinitives or stranded prepositions in English—a direct result of assuming that Latin categories are applicable to all languages! Taken to an extreme, however, it can make trying to understand a new language unnecessarily difficult (as can be attested by everyone who has ever complained about Lojban’s insistence on the use of its own native grammatical terms), and it makes the appreciation and productive criticism of conlangs nearly impossible. If every language is completely unique anyway, then what can we possibly say about it other than “it seems to work, or not”, and what point is there in even trying to do anything really groundbreaking?


The exact opposite of non-apriorism is, of course, apriorism. This is the philosophy behind Chomskyan Universal Grammar. It presumes that all possible grammatical categories already exist, independently of any particular language, and that languages differ primarily in which pre-existent categories they happen to use—or which mental switches are turned on. In principle, this point of view should make it very easy to compare languages—you just have to figure out what the cross-linguistic categories are, and then compare which ones are “turned on” or “turned off” in different languages. Even if we believe the arguments that, say, Lillooet lacks nouns, Luiseño lacks verbs, and Riau Indonesian lacks any parts of speech at all, that just means we need to do a little more work to figure out what all of the categories are, and which ones are truly universal. This can even be a quite useful point of view for a certain stripe of conlanger to take; it’s what makes it possible to write things like automatic random phonology generators, morphosyntax generators, and sound-change appliers.

In practice, however, both parts of aprioristic typology—identifying the existence of cross-linguistic categories, and identifying their exponents in any particular language—turn out to be rather difficult in many
cases. To quote Martin Haspelmath, “The idea that language-specific categories are equated with
cross-linguistic categories has given rise to countless category-assignment controversies.” What’s the line
between an enclitic and a suffix? Is “black box” a phrase or a compound? Does Tagalog mark topics, or
subjects? Does Mandarin have adjectives, or just stative verbs? And so on.


At its worst, apriorism results in simply assuming that the categories of English (or Latin, or whatever the

linguist’s or conlanger’s preferred language is) must apply to every new language, and forcing them to fit,
whether or not they make any sense.

Fortunately, there is a middle path: comparative non-aprioristic typology. This point of view recognizes that every language does have its own categories, but that large areas of overlap do exist and it is useful to re-use terms between languages for different categories that are nevertheless substantially similar in significant ways—as long as we are clear that, in doing so, we are not implicitly claiming that two different languages are exactly the same.



In deciding whether or not a particular language has a particular named category, then, we must decide whether it has a category whose functions overlap to a sufficiently large extent with categories in other languages for which we use the same label. And there is room for legitimate disagreement over how much overlap is “enough.” This largely comes down to whether you are more of a “lumper” or a “splitter”. Even if you reject apriorism, if you’re the kind of person who likes finding similarities and putting lots of things into a few large, sweeping categories, it’s likely that you will find verbs everywhere, and be harder to convince of their absence. On the other hand, if you’re the kind of person who likes to create lots of fine divisions, I think you’re more likely to accept more edge-cases as genuine examples of verblessness.


10 For a conlanger, this also suggests that your language’s categories, even if familiar, can—and perhaps should—be a little quirky.

How do we identify a language’s own internal categories, without reference to a priori categories? In order to identify grammatical categories, or parts of speech, there are two primary types of criteria:

Distributional (Syntagmatic)

— Is there a group of words that all appear in the same kinds of environments, relative to other types of words and phrases, which other types of words don’t appear in?

This is easiest to test for by constructing a test frame—a fixed sequence of words which would be grammatical, except that there is a hole in it. If two words can fit into the same hole, they are likely to be the same part of speech. Usually, testing just one frame isn’t enough on its own; the frame “I like _ apples”, for example, would identify “the” and “red” as being in the same category, but we know that they are not. This is especially true when we have no preconceived notions at all of what categories a particular language might have, as that makes it difficult to intelligently construct test frames that are sufficiently discriminating. Nevertheless, if the same group of words consistently displays the same behavior in many different frames, you have good evidence that they form a distinct part of speech.
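The frame-substitution procedure described above is mechanical enough to sketch in code. The following toy is my own illustration, not a tool from this article: the frames, word lists, and the hard-coded "grammaticality oracle" are all invented, standing in for real speaker judgments. It groups words by their pattern of acceptability across several frames.

```python
from collections import defaultdict

# Toy "grammaticality oracle": a hard-coded set of acceptable English
# strings stands in for a native speaker's judgments. All frames and
# word lists here are invented purely for illustration.
ACCEPTABLE = {
    "I like red apples", "I like the apples", "I like tasty apples",
    "the apples are red", "the apples are tasty",
    "red apples fell", "tasty apples fell", "the apples fell",
}

FRAMES = ["I like _ apples", "the apples are _", "_ apples fell"]
WORDS = ["red", "tasty", "the"]

def fits(frame: str, word: str) -> bool:
    """Does the word fill the frame's hole grammatically?"""
    return frame.replace("_", word) in ACCEPTABLE

# A word's pattern of acceptability across all frames is its
# distributional signature; words sharing a signature are candidates
# for membership in the same part of speech.
signatures = defaultdict(list)
for word in WORDS:
    sig = tuple(fits(frame, word) for frame in FRAMES)
    signatures[sig].append(word)

for sig, members in signatures.items():
    print(sig, members)
```

On the first frame alone, “the” and “red” are indistinguishable; adding the other two frames splits them into separate groups, which is exactly the point made above about a single frame being insufficient.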

Morphological (Paradigmatic)

— Is there a group of words that all seem capable of undergoing the same kinds of morphological
processes (taking the same sets of affixes, for example), or that have the same set of forms available (i.e.,
the same inflectional paradigm)?

This can be an extremely useful criterion in highly inflecting languages, like Latin. More isolating
languages, like English or Vietnamese, can make morphological analysis more difficult, but derivational
morphology can also be a rich source of information about possible lexical classes.

Hypotheses about parts of speech can also be supported by semantic evidence—is there a group of words that all seem to “do the same thing” or “have the same purpose” in some way? Ideally, all three of these types of criteria will reveal the same, or very similar, consistent groupings of words, and the more sources of evidence you have for a group, the better. It is important to note, however, that semantic evidence is often completely useless, and never constitutes sufficient evidence for identifying a new grammatical category on its own. If you are not aware of how semantic restrictions impact both distribution and morphology, you can be tricked into identifying categories that aren’t really needed.

For example, if we test English words with the frame “I told him a(n) _”, then it looks like words like “story”, “tale”, “joke”, “answer”, etc. belong to one distributional class, while words like “avalanche” or “apple” belong to another. But, this has nothing to do with grammar—it’s a simple consequence of the fact that some words, which may nevertheless be of the correct category, have meanings which simply don’t make any sense in the given context. The inadequacy of semantic categorizations can also be demonstrated by finding counter-examples to the typical semantically-oriented definitions of English parts of speech as taught in schools; e.g., that a noun is “a person, place or thing”. How, then, would we classify “emptiness”? It’s not a person, nor a place, nor a thing—indeed, it’s the very absence of a thing!—yet it behaves like a noun anyway, and so we call it a noun.

When it comes specifically to deciding whether a language has a particular part of speech, there is an additional wrinkle to consider: a language may lack a particular lexical category, while still retaining the corresponding phrasal category. In other words, it may be possible to construct phrases which correspond to particular parts of speech in translation, while lacking any individual words of that category. This is in fact the case with analyses of Salish languages that lack nouns—they do not lack noun phrases, it’s just that any noun phrase must be composed of an article (or other determiner) and a relative clause or a nominalized predicate. For a less exotic example, English is usually considered to have a syntactic category of “clauses”—we can put multiple words together to create phrases of type “clause”, which have a particular syntactic distribution distinct from other kinds of phrases—but there are no words of the category “clause”, and so “clause” is not a lexical category or part of speech. When coming up with new categories, it is comparatively trivial to invent new kinds of phrases—syntactic categories—to go along with new kinds of words. When eliminating categories, however, it is often much easier to, e.g., “get rid of verbs” or “get rid of nouns” than it is to “get rid of verb phrases” or “get rid of noun phrases”.


So, if you, as a conlanger, attempt to eliminate a certain category from your language, or to create a new one that does not exist in any language you are familiar with, and someone then criticizes your design on the grounds that “those are too verbs” or “that ‘new category’ is just a kind of adjective” or the like, you then have two options: First, you can step back, re-evaluate your design, and determine that, indeed, what you thought was novel really does still overlap quite a bit with the traditional categories, and revise accordingly. Or, you can step back, re-evaluate your design, and, having clarified the definitions you are using for that category, decide to respectfully disagree. You can be happy that you have accomplished your own goals, and been true to the internal logic of the language, regardless of what a critic might say.

Defining a Cross-Linguistic Verb

Having reviewed what makes a grammatical category in a general sense, we will now consider some
specific definitions for identifying a cross-linguistic category called “verbs”. We will then be able to examine
the lexical behaviors of specific languages, and determine how well any of their lexical categories overlap with
these definitions.


Dixon, in his Basic Linguistic Theory, emphasizes multiple times “that word classes must be recognized for each language on grammatical criteria internal to that language.” However, he later claims that “[i]n a nutshell—people who say that in language X there is no distinction between noun and verb simply haven’t looked hard enough.” Frankly, this seems to me to be a clear case of assuming your conclusion; if you are determined that distinct verbs must exist a priori, and you merely have to look harder, then you will of course find whatever convoluted string of evidence is necessary to support that point of view! Now, in complete fairness, Dixon is talking about describing natural languages, and in that context, based purely on the statistical facts from all of the natural languages that have already been described, assuming that you ought to find both nouns and verbs wherever you go is not a bad starting point. Claiming that they must exist is a bold thing, not a first resort! And, later on, Dixon softens that claim to “every human language”, which we might reasonably paraphrase as “every naturally-evolved human language”—leaving the door wide open for verbless (and nounless!) conlangs.

Dixon also admits that “It is amply apparent that there is no set of criteria which will serve to recognize noun and verb classes across all languages.” From a strict non-aprioristic point of view, this is about as straightforward an admission of the non-existence of such a universal category as you can get! Nevertheless, Dixon does provide a summary of general functional and semantic characteristics by which nouns and verbs may be recognized across languages:

Nouns:
1. Can always occur in phrases which serve as the arguments to a predicate.
2. Always include words referring to concrete objects.

Verbs:
1. Can always occur as the syntactic head of a predicate.
2. Always include words referring to actions.

11 Alternatively, if Salish languages do lack noun phrases, they at least have “determiner phrases”. Whether the cross-linguistic DP hypothesis is valid is beyond the scope of this article.
12 R. M. W. Dixon. Basic Linguistic Theory, Vols. 1-2. 2009, Oxford University Press.

Note that these definitions are not mutually exclusive! It is entirely logically possible to have a single class of words which can always occur as arguments to predicates, can always serve as the syntactic heads of predicates, and includes words referring to both objects and actions. Even if we grant Dixon the claim that genuine internal evidence for both distinct classes exists in every natural language, we are thus left wondering how to classify a conlang that has been constructed specifically to avoid any such evidence.

Now, some readers might wonder, based on those criteria, if Dixon is implicitly rejecting the idea that semantic content is not sufficient evidence for a category on its own. He is not. There is one important clarification: “the meaning of a lexeme cannot be used as a criterion for which word class it should belong to. After the classes have been established, their semantic content should be studied” (emphasis added). In other words, once you have determined that lexemes can be divided distributionally into some set of distinct syntactic classes, it is appropriate to determine the labels for those classes based on semantic tendencies.

Aikhenvald, in The Art of Grammar, provides a similar but somewhat simpler definition of a “verb”: verbs are members of a class which function as the “prototypical” choice for the syntactic head of a predicate. This has the side-effect that, if there is a merged class, such as discussed above, which satisfies Dixon’s definition for both “noun” and “verb”, and no other lexical category in the language is more likely to function as the head of a predicate, or more prototypical for that function, then Aikhenvald’s definition would label that category as “verbs”. Therefore, any language which has some mechanism of linguistic predication, by definition, has verbs, even if it has no other parts of speech! Personally, I find that to be a rather vacuous claim. It effectively turns verbs into the default label for ambiguous classes of predicates, with no a-priori justification for preferring that label over “nouns”, or any other label. We might as well just say “all languages have content words”, and thus avoid any possibly unwarranted implications about similarities that the categories of any one language might have with any other language’s verbs.


The dependence on the syntax of predication, however, is interesting. At this point, it is important to
distinguish linguistic predication from logical or semantic predication. It is reasonable to assume that any
functioning language must be able to express logical predication, in the sense of having the ability to assert facts
about things, and indicate relations between referents. A communication system that lacks such facilities is not a
language at all—merely a set of independent lexemes, more akin to a set of pre-linguistic animal calls. It may
furthermore seem self-evident that any functioning language must also then have some mechanism for linguistic
predication, in the sense of the ability to express an action or state via a grammatical predicate. David Gil,
however, argues that linguistic predication is not, in fact, a fundamental operation: it is made of smaller pieces,
which can be pulled apart and seen in action in other parts of natural languages. Furthermore, Gil claims that
“there do indeed exist languages whose grammars make little or no reference to the notion of predication.” 14

In Gil’s analysis, linguistic predication is a confluence of two features:

1. Thematic role assignment, which we might identify with logical predication—asserting some fact about a particular argument, possibly including its relation to another entity.

2. Headedness, which is defined as the property of some word in a phrase being singled out as being similar to the whole phrase in some way. In the case of linguistic predication, we are concerned with the specific kind of similarity in which the semantic denotation of some specific word is the same as that of the whole containing phrase—i.e., semantic projection.

13 Alexandra Y. Aikhenvald, The Art of Grammar: A Practical Guide. 2014, Oxford University Press.
14 David Gil, Where Does Predication Come From?, The Canadian Journal of Linguistics, Vol. 57, No. 2 (2012), pp. 303-333.

A verb (or any other kind of word) thus acts as a linguistic predicate when (and only when) it assigns roles to other words / phrases (or to the entities which they refer to), and projects its own meaning as the meaning of the phrase. For example, in the English sentence “I like pie”, “like” is serving as a linguistic predicate because it assigns a role (patient) to “pie”, and the meaning of the phrase “like pie” (and indeed of the entire clause) is the action of liking, as applied to the entity called “pie”.

So what happens when these two features—headedness and logical predication—fail to align? Well, we call that attribution. If, for example, we turn “like” into a participle, “liked”, we can create a noun phrase “the liked pie”. Here, “liked” is still assigning exactly the same role to “pie” as it did in the sentence, but “liked” does not control the meaning of the phrase—“pie” does. If, therefore, we come across a language which allows or requires the independent specification of role assignment and headedness by separate mechanisms, or which contains logical predication and role assignment but no concept of headedness, then Aikhenvald’s definition of “verb” will fail.
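Gil’s decomposition can be made concrete with a small sketch. The data model and names below are my own invention for illustration, not Gil’s notation: a phrase simply records its role-assigner and its head separately, so predication (the two coincide in one word) and attribution (they diverge) fall out as different configurations.

```python
from dataclasses import dataclass

# Toy model of the two components of linguistic predication discussed
# above (class and field names are invented for this illustration).
@dataclass
class Phrase:
    words: list[str]
    role_assigner: str      # word performing logical predication
    roles: dict[str, str]   # thematic role -> filler word
    head: str               # word whose denotation the phrase projects

def is_linguistic_predicate(phrase: Phrase, word: str) -> bool:
    """Linguistic predication = role assignment AND headedness in one word."""
    return phrase.role_assigner == word and phrase.head == word

# Predication: in "like pie", "like" assigns the patient role and also
# heads the phrase, so the two components coincide.
vp = Phrase(["like", "pie"], "like", {"patient": "pie"}, "like")

# Attribution: in "the liked pie", "liked" assigns the very same role,
# but "pie" controls the meaning of the phrase.
np = Phrase(["the", "liked", "pie"], "liked", {"patient": "pie"}, "pie")

print(is_linguistic_predicate(vp, "like"))   # True: predication
print(is_linguistic_predicate(np, "liked"))  # False: attribution
```

A language that decouples these two fields by separate grammatical mechanisms is exactly the kind of case for which, as noted above, a headedness-based definition of “verb” breaks down.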

Finally, we may encounter critics who prefer a more functional definition of these basic word classes. For example, Croft, in the framework of Radical Construction Grammar, has defined the categories of noun, verb, and adjective as follows: 15

1. A noun is the head of a referential phrase; i.e., the word in a referring phrase that denotes an object being referred to.

2. An adjective (cxn) is the head of an attributive phrase; i.e., the word in an attributive phrase that denotes a property.

3. A verb is the head of a clause; i.e., the word in a clause that denotes an action that is predicated.

Note, however, that the two components of the definition for a “verb” are not actually logically equivalent in the general case! They may be for natural languages, but as we have just seen, it is logically possible to separate the ideas of syntactic headedness, semantic headedness in terms of projection (i.e., which word most closely determines the meaning of an entire construction), and logical predication. If we take only the second half of the definition as relevant—that a verb is a word which denotes an action to be predicated—then we have effectively decided simply to re-use the word “verb” not as a name for a distributional category, but simply as a shorter alternate label for the functional category of “predicate”—and as explained above, the fact that a language must be able to express the function of predication is simply tautological. It is part of the definition of language. We shall thus continue to restrict ourselves to examination of ways to eliminate the formal category of lexical verbs, not the function of predication, which may be performed by formal categories other than verbs and which is fundamental to the nature of compositional language. Muddying of this functional definition of verbs, by decoupling the internal components of linguistic predication, will, however, be seen to be a valid approach in attempting to eliminate the formal category.

15 W. Croft, Morphosyntax: constructions of the world’s languages. Cambridge: Cambridge University Press.

16 We may also note that a strict application of this definition would suggest that, e.g., stative verbs are not actually verbs at all, as they do not predicate an action, thus making it even easier to design a language without verbs—but that is orthogonal to our current purposes.

With that background, in the next section I shall review a selection of conlangs and natlangs that are
claimed, by their creator or otherwise, to lack verbs, along with a brief overview of which definitions they
match or avoid matching, and how they function without them.

A Review of Verbless Languages


Kēlen 17
Kēlen, by Sylvia Sotomayor, is almost the prototypical example of a conlang without verbs. Whenever the
question is asked, in nearly any conlanging forum, “can a language exist without verbs?”, there are really only
two common responses: “Of course not”, and “look at Kēlen”.

In my opinion, however, Kēlen is far from the best example available. This is not to say that it is a “bad
language”—Kēlen is extremely well thought-out, and has been the basis for a large corpus of linguistic artwork.
Indeed, it may be the most well-fleshed-out conlang in this list, going well beyond the simple “no verbs”
gimmick; that alone gives it some weight as an argument that, yes, a verbless language is possible. Furthermore,
it is a very early example of the genre; we should not expect it to be a pinnacle of categorical creativity when
there simply was not much else available to draw on for inspiration in the conlanging community at the time.

Kēlen replaces verbs with a closed class of four function words called “relationals”, which specify a generic type of relation that exists between noun phrases in the same clause—specifically, existence, change into a state, transaction between a source and a beneficiary, and part or quality of a whole. Like verbs in many languages, they are inflected for tense, aspect, and modality, and sometimes to agree with one or more arguments. More specific relations can be expressed by the choice of appropriate prepositions, but most functionally-verbal meanings are expressed by combining one of the relationals with a noun that represents an action or state—e.g., “a run”, “redness”, “grief”.

I personally am ambivalent about whether or not “relationals” actually are “verbs”. The primary arguments against classifying them as such seem to be that they form a small closed class, and that they are largely semantically empty. Nevertheless, there are natlangs which have closed-class verbs, such as Japanese and Chechen, which tend to produce new verb phrases using a “do (something)”-style construction, and the lack of semantic content behind many auxiliary verbs does not detract from their universal analysis as verbs, albeit special ones. At least one author has claimed that the natural Australian language Jingulu has only three verbs—even fewer than Kēlen’s relationals, of which there are four—and while there are reasons to doubt that analysis, there are other languages, like Yawuru (another Australian language), which have only a dozen or so. It is at least clear that forming new verb phrases from a small class of “pro-verbs” (analogous to do, make, etc.) is a common natlang strategy.

17 For more information on Kēlen, see
18 Alona Soschen, On the Nature of Syntax, Biolinguistics, Vol. 2, No. 2-3 (2008), pp. 196-224, available at
19 Komei Hosokawa, The Yawuru language of West Kimberley: A meaning-based description, 1991, PhD thesis, Australian National University, available at

Kēlen thus simply goes all-out on a verb-phrase construction strategy which, while it may not be used exclusively by any natlang, is used by many natlangs. Incidentally, this is a criticism which Sylvia herself has acknowledged. Quote:

One can analyze the relationals as a small closed class of verbs. But then, in natlangs, copulas are
not always verbs, so maybe relationals are copulas.

So perhaps relationals are not in fact verbs after all. Relationals always seem to head predicative constructions; but, the class does not contain any words describing actions. So they may not be verbs by Dixon’s criteria, but they are by Aikhenvald’s. On the other hand, an argument can be made that relationals don’t always assign roles—or at least, they don’t assign specific roles, because the proper, more specific roles for clausal arguments can be provided by the nouns themselves. This is thus a border case. But, while a native Kēlen linguist may classify them differently, I am personally satisfied that Kēlen at least does have an equivalent to verb phrases in its syntax.

AllNoun 20
AllNoun, by Tom Breton, is the other classic entry in the history of verbless language attempts. AllNoun claims to be exactly what it says on the tin: a language with only one part of speech, all nouns. Unfortunately, it cheats; AllNoun requires the use of semantically-significant punctuation to flesh out its syntax and specify the relations between nouns. If they were to be spoken, these would almost certainly be interpreted as separate function words, of a different class—or several classes. AllNoun has several other deficiencies as well, acknowledged by its creator some years after the fact; in particular, the translation of adpositions is fairly awkward, and it is often unclear how to distinguish between nominals and propositions—i.e., between phrases that identify referents, vs. statements which can be interpreted as true or false (the defining semantic feature of a complete declarative sentence). This is a common failing of early loglang attempts, echoed in my own early work, among others.

Nevertheless, it quite handily avoids the existence of verbs, and even of verb phrases. It would take some
severe perversity of mind to force a verbal interpretation on any component of AllNoun. Essentially, AllNoun
avoids the need for verbs by breaking up and separately specifying every role in a proposition, and in turn
naming roles as nouns. Indeed, any noun at all can serve as either a role or an argument. To quote from the
AllNoun FAQ:

Aren’t there really two classes of noun, the «parts» and the «roles»?

No, they really are interchangeable. Words may tend to be more useful as roles or parts, but any word really can fit in either category.

As a limiting example, consider that in his column (and later book) Metamagical Themas, Douglas Hofstadter once asked, in complete seriousness, «Who is the Dennis Thatcher of America?». By this he meant, «Who or what in America plays the same role that prime minister Margaret Thatcher’s husband plays in England?»

It seems to me that if the proper noun «Dennis Thatcher» can be a role, then anything can be.

To translate “I am going to the store”, for example, you might do something like:

(destination:store mover:me time:now) 21

20 For more information on AllNoun, see

So, is AllNoun a complete, functional language? Maybe not. Does it demonstrate a viable strategy for
eliminating verb phrases? Absolutely. There is clearly logical predication and role assignment going on, but no
concept of headedness; as such, AllNoun eschews the concept of linguistic predication entirely, and succeeds in
being verbless by all of Dixon’s and Aikhenvald’s criteria.
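To make that structure concrete, here is a minimal sketch in Python of how a flat AllNoun-style expression could be mechanically decomposed into an explicit role assignment. The parser is my own illustration, not part of Breton's specification, and it ignores nested expressions:

```python
def parse_allnoun(expr):
    """Parse a flat AllNoun-style expression like
    '(destination:store mover:me time:now)' into a dict
    mapping each role word to its argument word.
    Nested sub-expressions are not handled in this sketch."""
    inner = expr.strip().lstrip("(").rstrip(")")
    roles = {}
    for pair in inner.split():
        role, arg = pair.split(":")
        roles[role] = arg
    return roles

# "I am going to the store":
print(parse_allnoun("(destination:store mover:me time:now)"))
# {'destination': 'store', 'mover': 'me', 'time': 'now'}
```

Note that, as in AllNoun itself, nothing in this sketch privileges one role over another: there is no head, just a bag of role-to-argument pairings.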

Role-marking Languages


Cho’ron & Gogido
Something like the AllNoun strategy of splitting up roles and marking each one separately has been
independently re-invented by many conlangers in many conlangs, even in cases where eliminating verbs was
not a primary design goal. In my own project Gogido, for example, which has a very clearly defined category of
verbs, you see a watered-down version: it is permissible to elide the verb in a clause if the set of prepositions or
cases used, and the pragmatic implications of the nominal arguments, make it obvious what the functional verb
should be. This verbless subset of the language turns out to cover a surprisingly large proportion of colloquial
usage. A more clear-cut example is provided by Virginia Keys’s Cho’ron, which is designed with no distinct
class of verbs and uses adpositions or case-marking suffixes to assign nominal roles. Cho’ron is
intended to be impressionistic and ambiguous, so the greater specificity afforded by allowing arbitrary lexical
items to serve as roles or by allowing lexical verbs is considered unnecessary.

The apparently small change of using a closed class of adpositions or case markers, as opposed to allowing
any lexical item to act as a role-marker, does, however, have a significant effect on the arguments for how these
languages should be categorized. In particular, does the use of adpositions actually count as introducing verbs?
As stated earlier, I expect most readers would agree that prepositions are a different thing from verbs—yet,
there are natural languages in which the category of prepositions overlaps with, or is in fact entirely replaced by,
a subcategory of verbs. It is conceivable, therefore, that a language like Cho’ron could be analyzed as using
extensive serial verb constructions, with exclusively intransitive verbs and a lot of noun incorporation.

Adpositions and cases clearly assign thematic roles; that’s why we can contemplate using them to replace
verbs in the first place. But do they serve as phrasal heads, in the sense required by Gil’s definition of linguistic
predication, or Croft’s definition of headedness? In other words, do they control the denotation of their phrase?

Adpositions are generally considered to serve as syntactic heads (thus, we can speak of “prepositional
phrases” rather than “noun phrases modified by a preposition”), but morphological case markers are not. The
distinction, however, can become blurry and arbitrary in cases where the language’s morphophonology makes it
ambiguous whether an adjacent role-assigning element is in fact a separate adpositional word, or an affix, or a
clitic. Jeffrey Brown’s τεμεηια (Temenia), in fact, is explicitly constructed such that a speaker can choose to
interpret its case markers in any of those three ways!

But even in clear-cut cases of adpositions serving as syntactic heads, whether they embody the necessary
semantic control is debatable. Does, for example, “in the house” primarily express “in-ness”, or “house”, or
some equal mixture of the two?

21 Some readers may object that “mover” in this example is in fact derived from a verb. This, however, is a failure of the English gloss,
not of the AllNoun language itself; it is perfectly consistent to have, e.g., an underived noun for “one who moves”, despite the fact that
English happens to lack such a word itself. This is a frequent problem in analyzing verbless languages (and indeed in analyzing other
languages’ “exotic” categories in general), as the glossing language may simply lack vocabulary of an appropriate type to accurately
reflect the categories of the glossed language. In such cases it is important to remember that a language must be analyzed in its own
terms, and not based on how it translates.

22 Personal communication with author.

In the end, this seems largely to be a matter of personal opinion. If a language creator using this strategy
claims that their role-marking forms are not linguistic predicates, but are rather purely attributive, or something
in between, I choose to trust their interpretation of their own language’s semantics! And in that case, with no
linguistic predication, there can be no verbs in the Dixonian or Aikhenvaldian senses. Choose a different
analysis, though, and it can just as easily go the other way.

Paonese 25
Paonese was conceived in outline by science-fiction author Jack Vance in his book The Languages of Pao.
Based on the scant evidence in that book, I “re”-constructed a more complete version of a “potential Paonese”.
Among the information about Paonese provided by Vance is a list of the parts of speech it contains—as well as
those it does not, including verbs. Similar to Davin, described below, Paonese is also described as “presenting a
picture of a situation rather than describing an act.” It was then my task to flesh out a language that conformed
with those and other typological claims.

The resulting language is another that uses a strategy very similar to that exemplified by AllNoun. Most

content words in Paonese consist of one or more nominal roots compounded with bound lexical morphemes
which Vance calls “suffixes of condition”, and which I have chosen to call “semiverbs”. Semiverbs are much
like incorporated postpositions or case markings; many of them encode adpositional and case-like concepts like
“agent”, “patient”, “location”, etc. Not all semiverbs, however, are so semantically empty; some of them encode
much more specific states, like “subject to attack”. Although they are a distinct syntactic class from Paonese
nouns, they can encode arbitrarily complex roles, much like in AllNoun. In effect, semiverbs are like chunks
broken off of the complete semantics of a normal verb—hence the name. Between them, all of the semiverb
components in a single clause combine to “present a picture” of all of the different components of a single
propositional concept.

If we treat these semiverbs as a sort of open class of complex case-marking suffixes (as Vance seems to
have intended), then the argument seems clear cut: there is no linguistic predication, only attribution, and thus it
is verbless by all of the criteria we have considered. On the other hand, however, the semantic breadth and
depth available in the class of semiverbs, compared to typical adpositions or case inventories, argues that this is
not so much a situation of simple case-marker suffixation, but rather of compounding of equal-status roots, in
which case the role-assigning component may actually be the head of the compound. The sceptical may thus be
justified in claiming that Paonese really consists exclusively of intransitive serial verb constructions with
obligatory universal noun incorporation. If, however, we turn to the closest approximation we have of
native-speaker intuition—namely, Vance’s descriptions and the opinion of the reconstructor (myself)—that’s
simply not the case. These complexes are fundamentally nominal in nature. Thus, there can be no linguistic
predication, and therefore no verbs.

25 For more information on the reconstruction of Paonese, see Logan Kearsley, Potential Paonese: A Reconstruction from Jack
Vance’s “The Languages of Pao”, Fiat Lingua, Nov. 2015, available at

Eliminating Lexical Verb Roots



Dyrel, Duojjin, & Mundari

Marlowe Clark’s Dyrel and Clay Lafontaine’s Duojjin share another borderline approach—in a sense, they
have verbs, but do not have verb roots.

In other words, there are verb phrases in the syntax, and there are single phonological & morphological
words which can act as heads of verb phrases on their own—but such words are only formed by inflection or
regular derivation of other parts of speech. There are no roots which are basically verbs. All of them are
derived.

So, on the one hand, there is definitely a morphosyntactic category which it is sensible to label “verb” in
both of these languages; but, on the other hand, is it a “part of speech”? Is there a lexical class of “verbs”, or just
an inflectional category? I am inclined to say “no, there is no lexical class of verbs”; but, given that some roots
will be more “prototypically” verby than others, Dixon and Aikhenvald almost certainly would identify a class
of verbs; the big remaining question is whether or not it should be a separate top-level classification, or merely
an inflectional sub-class of nouns.

A similar approach can be seen in a monocategorial analysis of the natlang Mundari. In this case, there is no

morphological derivation of explicit verbs, but any basic lexical root can be inflected to serve as the main
predicate of a clause–and indeed, even multi-word phrases can be so inflected, with appropriate clitic strings!
When a full NP appears internal to your functional verb slot, it’s pretty clear that that is not a “verb”! Of course,
this does mean that the single common lexical category of content roots would be labelled “verbs” in an
Aikhenvaldian sense. As previously noted, I consider this something of a vacuous classification, but should you
choose to go with this approach, it may be safer to simply claim elimination of the noun/verb distinction as a
means of satisfying a larger proportion of potential critics, rather than making the stronger claim of having
eliminated verbs entirely.

Alternative Functional Divisions



Luiseño
Luiseño is a Uto-Aztecan language of southern California. According to an analysis by linguist Susan
Steele, it has four formal lexical classes of content words, distinguished by their ability to take either, both, or
neither of the absolutive and possessive sets of affixes—and none of them are verbs.

Earlier analyses claim that the function of absolutive suffixes (a common feature of Uto-Aztecan languages)

in Luiseño is precisely to distinguish nouns from verbs. According to Steele, however, this distinction has no
predictive power in Luiseño grammar. I was not actually easily convinced of this myself. After all, all of the
words which can’t take absolutive or possessive affixes can take tense and aspect marking, and they all
semantically refer to events or states—the prototypical meanings for verbs. Sure, some other words can also
refer to events and states, but that’s no different from, say, English having a noun for “a run”; and sure, there
are other types of words that can take tense marking and serve as clausal predicates, but that could just be
zero-derivation, or else it means that the rule is a little more complicated than just “things that can’t take
absolutive marking are verbs”—it doesn’t mean that there aren’t verbs at all.

26 For more information on Dyrel, see
27 For more information on Duojjin, see
28 John Peterson, There’s a grain of truth in every “myth”, or, Why the discussion of lexical classes in Mundari isn’t quite over yet,
Linguistic Typology, Vol. 9, No. 3 (Jan. 2005), pp. 391-405, available at
29 Susan Steele, Lexical Categories and the Luiseño Absolutive: Another Perspective on the Universality of «Noun» and «Verb»,
International Journal of American Linguistics, Vol. 54, No. 1 (Jan. 1988), pp. 1-27, available at

However, if there is a category of “verbs” in Luiseño, it should mean something—identifying something as
a “verb” should tell you something about how it behaves that you could not deduce from its other properties.
For example, identifying something as a “verb” rather than a “noun” in English means it can act as the predicate
of a sentence, which is not deducible strictly from semantics, since English has action and state nominals, like
the aforementioned “a run”.

Steele’s argument proceeds as follows: Suppose that all of the event and state words that also can take
aspectual affixes in Luiseño are “verbs”, and the remainder of the content words, regardless of their 4-way
categorization, are “nouns”. Then, if those categories are to have any meaning, all “verbs” should have some
consistent behavior—say, being capable of serving as the predicate of a sentence, that being the prototypical
function of a verb, and the identifying characteristic of a verb according to Croft. Similarly, all “nouns” should
behave the same way—say, by being capable of serving as an argument to a predicate.

In practice, however, there are several counterexamples:

1. All “verbs” can serve as predicates, but not all “verbs” can serve as predicates of a sentence. Many of
them are restricted to subordinate clauses.

2. Many Luiseño “nouns” can also serve as predicates in a sentence, in addition to functioning as
arguments.

3. Many Luiseño “nouns” can’t actually serve as arguments to predicates—they can only appear as
predicates themselves. 30

Thus, we can conclude that simple categories of “verb” and “noun”, cutting across the 4-way distinction
evidenced by the behavior of absolutive and possessive affixes, have no predictive power in describing Luiseño. It simply
cuts up its lexicon differently, into different parts of speech based on different core semantic features. Like
many other examples in this article, Luiseño almost certainly has verb phrases in its syntax, but no individual
words that can reasonably be classified as “verbs” by purely formal characteristics once semantic interference
has been taken into account.

Nevertheless, there is a class of words which are more “prototypical” in predicative roles than others (i.e.,
those that cannot take either possessive or absolutive inflections). And in that sense, both Dixon and Aikhenvald
would almost certainly recognize a class of verbs in Luiseño.

Riau Indonesian 31

Riau Indonesian is what linguist David Gil has identified as one of the closest known real-world examples
of a theoretical model called Isolating-Monocategorial-Associational (IMA) language. Quote:

30 A similar subcategorization can be seen in English adjectives, as there exists a distinct class of obligate-predicate adjectives; e.g.,
“glad”. One can say “I am glad”, but an attributive usage like *“the glad boy” is ungrammatical in modern usage.
31 For more information on this analysis of Riau Indonesian, see David Gil, How Much Grammar Does It Take to Sail a Boat, in
Language Complexity as an Evolving Variable, Geoffrey Sampson, David Gil, and Peter Trudgill (eds.), Vol. 13, Oxford
University Press, 2009, available at


Examination of any naturalistic text in colloquial Malay/Indonesian reveals occasional accidental
stretches of Pure IMA Language. Examination of such Pure IMA fragments suggests that they,
alone, suffice to fulfill all of the important functions associated with the language as a whole. In
other words, the non-IMA embellishments do not add to the expressive power of the language, or
increase its functionality in any obvious way. Thus, contemporary Relative IMA Languages such
as colloquial Malay/Indonesian show that Pure IMA Language alone is enough not just to sail a
boat, but to support most aspects of modern human civilization, culture, and technology.

So, what is IMA language? It is

a) Morphologically isolating—there is no word-internal structure.
b) Syntactically monocategorial—there are no distinct parts of speech or other syntactic categories.
c) Semantically associational—there are no construction-specific rules for semantic interpretation.

Compositional meaning in an IMA sentence is constructed entirely by means of the generic “Association
Operator”. In simple terms, this means that, given a sentence of multiple words, the meaning of the sentence is
“something logically related to, or associated with, all of those words”. All other forms of syntactic composition
are just specializations of that maximally-broad rule.

This would seem to produce vague and ambiguous sentences, and indeed it does—but it turns
out, in most cases, context is sufficient to determine which specific meaning is intended, which includes
determining the underlying logical predications—a critical component for being a functioning language at all!
Gil asks us to consider the simple sentence

Ayam makan

Chicken eat.

These two words are underspecified for tense, aspect, mood, number, and even grammatical role
assignment! The chicken could be an agent, a patient, an experiencer, or any other kind of participant; “eat”
could specify an activity, a place, a reason, a time, etc. Consequently, the sentence as a whole could be
interpreted in such varied ways as “The chicken is eating”, “the chickens which were eaten”, “because I ate the
chicken”, “when the chicken ate”, and so forth. Yet, this is a perfectly grammatical and complete sentence, not
telegraphic or otherwise stylistically marked in any way—and the range of meanings is quite
different if, say, you yell this angrily at your roommate while looking through your shared fridge, give it a
questioning tone while passing the meat freezer at the supermarket with your spouse, or casually mention it
while pointing at a chicken pecking the ground.
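The scale of this underspecification can be sketched computationally. The following toy (entirely my own; the inventory of thematic functions is invented for illustration, and real Riau Indonesian interpretation is of course richer) simply pairs each word with every available thematic function, which is all the Association Operator itself constrains:

```python
from itertools import product

# Hypothetical inventory of broad thematic functions a word
# might fill under pure association.
FUNCTIONS = ["agent", "patient", "activity", "time", "reason"]

def readings(words):
    """Return every assignment of one thematic function to each
    word. The Association Operator rules none of these out;
    only context narrows the field."""
    return [dict(zip(words, combo))
            for combo in product(FUNCTIONS, repeat=len(words))]

candidates = readings(["ayam", "makan"])
print(len(candidates))  # 25 candidate role assignments for two words
```

Even with only five candidate functions, two words already admit twenty-five skeletal readings; context has to do all the disambiguating work, exactly as described above.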

Actual Riau Indonesian does contain numerous features that sully the theoretical perfection of its IMA

nature. It does have a few derivational affixes and other morphological processes like compounding and
reduplication. It also has a heterogeneous class of function words, which serve to explicate the syntax a little
more completely than pure association would do, thus making it not purely monocategorial. And it has some
additional compositional rules relating to specific lexical items and syntactic structures, so it’s not quite
purely associational, either. But, despite all of that, Indonesians themselves have been known to claim that their
own language “has no grammar”, and its content words are still not reliably distinguished into any distinct
lexical classes—i.e., no part-of-speech called “verbs”.

As it turns out, those various impurities mean that Riau Indonesian as used in the real world does have
linguistic predication—and lexically-specified predication at that, unlike Kēlen—and so clearly does have some
verbs. The IMA core, however, does not, and this could easily serve as the basis for a purer verbless conlang.

Fith 32
Jeffrey Henning’s Fith? Yes.

“But Fith definitely has verbs!” you say. “It says so in the list of ‘Parts of Speech’ on the archived

Langmaker site!”

You’re quite right—it does. But just because Jeffrey thought there were verbs, doesn’t mean he was right. It is
not unheard of for conlangers to misanalyze and misunderstand the nature of their own creations, and I claim
that this is a case of aprioristic philosophy taken too far.

Unfortunately, Fith is underspecified—there is a dearth of textual examples to analyze, and many important

questions go unaddressed in the existing documentation, some key ones being “how are the concepts of
complement and relative clauses expressed?”

But based only on the information currently available, it is perfectly possible to analyze Fith in terms of
only two parts of speech: modifiers (which correspond to macros or compiler words in FORTH, for the CS
geeks among us), and combinators (which correspond to functions in the combinator calculus 33), where
combinators encompass all of the content words and stack conjunctions—everything that isn’t a modifier.


The only thing that reliably differentiates “verbs” in Fith from other parts of speech is a purely semantic
feature: that they produce propositions, rather than nominals. Yet, as previously discussed, semantic features are
an extremely unreliable basis for identifying formal parts of speech: consider that, in English, “run” always
refers to an event, but it can be a noun or a verb; and “red” always refers to a quality, but it can be a noun or an
adjective. This does have some syntactic consequences—after all, it is unlikely that you can make a
grammatical sentence in Fith that does not contain at least one “verb” (although, perhaps we can—it is, after all,
underspecified). But, if that syntactic behavior is completely predictable from semantics alone, there is little
motivation to posit a new lexical category. You just don’t need it to explain how the language works.
“Verb”less sentences are then excluded not because they are ungrammatical, but simply because they don’t
make sense—much like the famous English example “Colorless green ideas sleep furiously”, which is in all
ways grammatically licit, but semantically senseless.

There is the possibility of syntactic evidence that could argue for the existence of a separate class of Fith
verbs—if propositions cannot be used as arguments to other verbs, like nominals can, then we might have to
posit a purely formal syntactic feature to explain that, which would require carving out a new part of speech. If
so, there would need to be some other method, currently unspecified, of expressing the ideas behind
complement clauses. Additionally, it is telling that Fith, like, e.g., Armenian, but unlike English, has distinct
conjunctions for joining propositions vs. joining noun phrases. Again, however, this behavior could be
accounted for on purely semantic grounds, much like English uses different structures for “tell a story” vs. “talk
about an apple” even though “story” and “apple” both belong to the same part of speech—“tell an apple” just
doesn’t make semantic sense. Furthermore, the fact that propositions can be arguments to a single,
undifferentiated class of adjectives/adverbs suggests that they should be usable as arguments of other verbs as
well.

Thus, I claim that the category of “verb” has little explanatory power in Fith—and thus, does not in fact
exist. Jeffrey Henning may have invented a verbless language by accident, and one which works differently
from any of our previous examples. In fact, while the result would almost certainly not be human-speakable,
Fith embodies the only design principle I am aware of which could result in a fully functional
and unambiguous language having only one undifferentiated lexical class: that of treating words as
n-ary combinators (i.e., combinators which can take any number, n, of arguments), where each word
effectively has its own syntactic rules completely lexically specified.

32 For more information on Fith, see
33 For an introduction to combinator calculus, see
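To illustrate what “words as n-ary combinators” means computationally, here is a minimal FORTH-style stack evaluator sketch. The three-word lexicon and its arities are invented for illustration; real Fith words and their behaviors differ:

```python
# Each word is a combinator: (arity, function). A sentence's
# "syntax" is entirely determined by these lexical entries, as in
# a FORTH-style stack language: each word pops as many arguments
# as its lexically specified arity and pushes one result.
LEXICON = {
    "dog":  (0, lambda: "dog"),
    "man":  (0, lambda: "man"),
    "bite": (2, lambda agent, patient: f"bite({agent},{patient})"),
}

def evaluate(sentence):
    """Evaluate a space-separated combinator sentence left to
    right, returning the final stack contents."""
    stack = []
    for word in sentence.split():
        arity, fn = LEXICON[word]
        args = [stack.pop() for _ in range(arity)][::-1]
        stack.append(fn(*args))
    return stack

print(evaluate("dog man bite"))  # ['bite(dog,man)']
```

Note that nothing in the evaluator distinguishes a “verb” from a “noun”: “bite” differs from “dog” only in its arity, which is exactly the point being made about Fith.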

So, how does Fith stack up in terms of Dixon and Aikhenvald? Well, to start with, there is some definite
difficulty in identifying constituents, which makes it rather difficult to identify phrasal heads. Nevertheless, if we
consider Fith to have discontinuous syntactic constituents, there are words which, by a quirk of lexical
semantics, happen to assign roles and head their own phrases (what Jeffrey called verbs) and other words which
happen to assign roles but not head their own phrases (what Jeffrey reasonably called adjectives). As already
noted, I am loath to identify verbs when the distributional characteristics of that category are purely a matter of
semantics in this language, but technically they do satisfy the criteria for acting as heads of linguistic predicates.
Thus, in Dixonian and Aikhenvaldian terms, Fith probably does have verbs after all.

Davin 34

Davin, by Zie Weaver, is construed as a relatively thin layer of pronounceable language over a core of
formal semantics defined by set theory. If you thought talking about combinator calculus and computer science
in Fith was confusing, Davin really jumps in the deep end.

Davin completely side-steps the typical loglang problem of distinguishing propositions from
objects—despite the obviousness of the need for such structures, it doesn’t actually bother with
asserting propositions at all. 35 How in the world does it work then? According to Zie, by bringing your attention
to a particular idea, which it describes in terms of sets. In a way, Davin grabs hold of the power of language to
manipulate what goes on in someone else’s mind, and brings it to the forefront, rather than taking a detour
through the land of logical propositions. Ideas that we would express in English as nouns are expressed in
Davin as sets containing all of the objects that conform to the definition of that noun. Ideas that we would
express as verbs are expressed in Davin as sets containing all of the events that might conform to the definition
of that verb. One single part of speech, called “owpys”, encodes everything that is covered in English by such
diverse words as nouns, verbs, adjectives, and prepositions. Speaking or writing in Davin consists of using these
basic building blocks, along with a variety of types of function words which manipulate sets in particular ways,
to build a set containing exactly the idea or ideas you wish to express.
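A rough computational analogy (entirely my own; Davin's actual owpys and function words are not these, and real Davin sets are intensionally defined, not finite lists) treats each content word as a set and the function words as set operations used to zero in on an intended referent:

```python
# Each owpy-like content word denotes the set of things that
# conform to its definition. (Finite toy sets stand in for
# Davin's intensionally defined ones.)
dogs   = {"rex", "fido", "lassie"}
browns = {"rex", "acorn", "bear"}
pets   = {"fido", "rex", "whiskers"}

# Combining words narrows the set, until it contains exactly
# the idea the speaker wishes to bring to attention:
brown_pet_dogs = dogs & browns & pets
print(brown_pet_dogs)  # {'rex'}
```

Nothing here is asserted to be true or false; the "utterance" simply directs attention to a particular set, which is the essence of the strategy described above.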

This is our first dive into true alienness. It’s what Fith could be. All of the familiar categories from

any natural language are thrown right out the window, and replaced with completely new ones, and a
completely new way of looking at the world and at communication. And, to the best of my understanding, there
is nothing there that looks like linguistic predication as defined by Gil—thus, I conclude that Davin most
probably is verbless in both the Dixonian and Aikhenvaldian senses.

34 For more information on Davin, see

35 One may at first be tempted to think that this refutes the assumption that to be a language at all, a language must be able to express
logical predication, as Davin does not appear to do so. Note, however, that while Davin does not describe predicate relations
explicitly, it is still fully capable of communicating them; it merely does so in reverse, by starting with the universal set of all possible
predications, and providing you with rules for selecting specific ones to pay attention to.




UNLWS 36

Unker Non-Linear Writing System, or UNLWS, is a purely-graphical two-dimensional language by Alex
Fink and Sai. The majority of words (or “glyphs”) in UNLWS correspond to logical predicates of arbitrary
arity (i.e., how many arguments they take); each predicate glyph contains attachment points for each of its
argument places, and lines connecting the attachment points on different predicates indicate co-reference—i.e.,
that the same entity is the argument in both cases. There are some syntactically distinguished closed categories
of glyphs, such as articles, which behave like logical quantifiers rather than predicates; “line decorations” which
modify the relations encoded in co-reference lines, but have no explicit attachment points themselves; and
“pronouns” which serve primarily as layout devices, to stand in when drawing an explicit line between binding
points would be unergonomic. Thus, UNLWS is not completely monocategorial—but like Luiseño, it divides
things up differently. There are no particularly noun-like words which simply refer to an entity—entities are
described by all of the predicates that they are arguments to. Similarly, there are no particularly verb-like words,
or even verb phrases, or any kind of predication in the linguistic sense. No logical predicate is syntactically
more important than another 38; nothing obviously forms the core of a sentence or the head of a phrase. Without
headedness, there can be no linguistic predication, so this is a clear-cut example of another language that is
verbless in both Dixonian and Aikhenvaldian terms.


Much like Davin is a thin linguistic layer over set theory, UNLWS is, at its core, a (somewhat thicker)

graphemic layer over a particular model of predicate calculus. This is a branch of logic used in formal
semantics, in which a sentence like “I pet the dog” might be translated into a formula something like this:

me(a) & dog(b) & pet(a,b)

in which there are two entities being referred to (represented by the variables ‘a’ and ‘b’), and ‘a’ is asserted to
satisfy the predicate ‘me()’ (i.e., it is me), while ‘b’ is asserted to satisfy the predicate ‘dog()’ (i.e., it is a dog),
and the pair (a,b) is asserted to satisfy the predicate ‘pet’ (i.e., entity ‘a’, in addition to being “me”, is the agent
of ‘pet’, and entity ‘b’, in addition to being a dog, is the patient of ‘pet’). You’ll notice that, in the formula, a
pronoun, a noun, and a verb are all translated into the same kind of mathematical thing—a predicate, with some
number of arguments. Adpositions, adverbs, quantifiers—just about any part of speech you can think of—can
all be represented the same way. In some cases it becomes rather convoluted—mathematicians and formal
semanticists have other tools that can be added in besides just predicates to make things simpler—but it’s
possible, and results in completely unambiguous formulae. In fact, finding a way to represent unambiguous
formulae of predicate calculus of arbitrary complexity in an ergonomic, speakable way has been called “the
holy grail of loglanging”; Lojban is the most well-known attempt at this. Unfortunately, the number of
arbitrary variable names you need to keep track of quickly becomes impractically large when trying to translate
complex, real-world discourses—which is one reason why UNLWS tends to dispense with the explicit variables
and just connect everything with lines.
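To show how such a formula is interpreted, here is a small model-checking sketch of my own: the model of which tuples satisfy which predicates is invented for illustration, but the mechanics (a conjunction of atoms holds iff every atom is satisfied) follow the standard predicate-calculus semantics described above:

```python
# A model: each predicate name maps to the set of argument
# tuples that satisfy it. The entities are just labels 'x', 'y'.
MODEL = {
    "me":  {("x",)},
    "dog": {("y",)},
    "pet": {("x", "y")},
}

def holds(model, formula):
    """A formula is a list of (predicate, args) atoms, implicitly
    joined by '&'; it holds iff every atom is satisfied."""
    return all(tuple(args) in model[pred] for pred, args in formula)

# me(a) & dog(b) & pet(a,b), with a=x and b=y:
print(holds(MODEL, [("me", ["x"]), ("dog", ["y"]), ("pet", ["x", "y"])]))
# True
```

Observe that "me", "dog", and "pet" are all the same kind of mathematical object here, differing only in how many arguments their satisfying tuples contain; that uniformity is exactly what UNLWS builds its glyphs on.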


WSL 40
Similar to Davin and UNLWS, my own conlang Wjerih Sarak Lezu (WSL) puts a thin layer of speakable
language over a particular model of formal semantics, in this case based on predicate calculus. WSL represents
a conscious attempt to start with the uniform class of underlying predicates, and divide them up between
syntactic categories in the surface language in a novel way, free of natural language biases.

36 For more information on UNLWS, see
37 “Logical” is meant not in the sense that Mr. Spock would approve of them, but in the sense that they correspond to objects in a
formal system.
38 However, there are mechanisms to indicate greater or lesser salience, such as bolding.
40 For more information on WSL, see

While it is based on predicate calculus like UNLWS, WSL uses a slightly different formulation of predicate
calculus semantics than UNLWS does. First, all predicates that represent events or states are augmented with an
event argument, which represents the state or event itself. This makes it possible to then assert other predicates
about the same event, which is useful for things like expressing adverbs. Building on the example given in the
section on UNLWS, “I pet the dog softly” might be translated into a formula like

me(a) & dog(b) & pet(e,a,b) & soft(e)

where an extra event argument ‘e’ has been added to ‘pet()’, representing the event of petting itself, which we
can then say additional things about. (The system that results from this augmentation is known as
“Davidsonian” semantics.) Second, all predicates are reduced in arity; i.e., they are reformulated to require at most
two arguments. In the example above, it is necessary to simply memorize which role applies to which argument
of ‘pet()’ in order (as is also the case in Lojban); for complex predicates, this can become quite unwieldy. To avoid
that, we can introduce special predicates that assert that some other entity has a particular role in some event;
then, these special role-marking predicates will have two arguments, while all of our original content-predicates
require only one. Thus, our example sentence can be reformulated as:

me(a) & dog(b) & pet(e) & soft(e) & agent(e,a) & patient(e,b)
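This reformulation can be carried out mechanically. The sketch below is my own toy, and the assumption that roles can be assigned to argument positions in a fixed order ("agent" first, "patient" second) is an illustrative simplification:

```python
def reduce_arity(pred, event, args, roles=("agent", "patient")):
    """Rewrite pred(event, *args) as the unary atom pred(event)
    plus one binary role atom per positional argument, splitting
    the roles out into separate predicates."""
    atoms = [(pred, (event,))]
    for role, arg in zip(roles, args):
        atoms.append((role, (event, arg)))
    return atoms

# pet(e,a,b)  ->  pet(e) & agent(e,a) & patient(e,b)
print(reduce_arity("pet", "e", ["a", "b"]))
# [('pet', ('e',)), ('agent', ('e', 'a')), ('patient', ('e', 'b'))]
```

After this rewriting, every content predicate is unary and every role predicate is binary, which is precisely the split that modern WSL's Nouns and Roles are built on.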

Proto-WSL starts with IMA language as evidenced in Riau Indonesian, and adds a small bit of extra syntax on top. Rather than interpreting all the words in a sentence together, Proto-WSL formally divides them up into phrases, where each phrase describes a single argument, and the whole sentence shares one event argument between all the phrases. Our example sentence in Proto-WSL syntax might then look something like

Me agent, dog patient, pet soft is.

where ‘is’ is a special binary predicate which says that the role played by this phrase’s argument in the sentence’s event is to be that event. The astute reader will note that Proto-WSL cheats a little bit, just like AllNoun, in requiring punctuation to elucidate its syntactic structure; the amount required, however, is quite minimal, and in speech this can be handled by intonation to mark phrase boundaries, with no special function words required.41 Note that, because all words are formally of the same type, phrases can be re-arranged, and elements within a phrase can be re-arranged, with no change in the meaning:42

Soft is pet, agent me, patient dog.
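This order-insensitivity can be demonstrated mechanically. Below is a small Python sketch (my own; Proto-WSL has no official tooling, and the arity table is just a toy lexicon for this one sentence) that maps both orderings to the same canonical logical form:

```python
# Toy lexicon: the arity of each word's underlying predicate.
# (In Proto-WSL all words share one syntactic class; arity is purely semantic.)
ARITY = {"me": 1, "dog": 1, "pet": 1, "soft": 1,
         "agent": 2, "patient": 2, "is": 2}

def logical_form(sentence):
    """One comma-separated phrase per argument; one shared event 'e'.
    Returns atoms like ('dog', 'x1') or ('patient', 'e', 'x1')."""
    atoms = set()
    for i, phrase in enumerate(sentence.rstrip(".").split(",")):
        arg = f"x{i}"
        for w in (w.lower() for w in phrase.split()):
            atoms.add((w, arg) if ARITY[w] == 1 else (w, "e", arg))
    return atoms

def meaning(sentence):
    """Canonicalize up to renaming of argument variables: an argument is
    identified only by the bag of predicates asserted of it."""
    profile = {}
    for atom in logical_form(sentence):
        profile.setdefault(atom[-1], []).append(atom[0])
    return tuple(sorted(tuple(sorted(v)) for v in profile.values()))

# Phrases, and words within phrases, can be freely rearranged:
assert meaning("Me agent, dog patient, pet soft is.") == \
       meaning("Soft is pet, agent me, patient dog.")
```

The canonicalization step is doing all the work: since each word only contributes an unordered predication, linear order never reaches the semantics.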

The core syntax of modern WSL is then obtained by taking all of the binary predicates and putting them into one part of speech (called Roles), which come at the end of a phrase (as they did in the first example), and putting all of the unary predicates (those that only take one semantic argument) into another part of speech (called Nouns), which come before Roles. Their differing syntactic distribution simultaneously identifies these as actual distinct parts of speech, as opposed to merely semantic groupings, and eliminates the need to cheat by using punctuation or intonation to elucidate the syntactic structure.

Now, there is clearly no distinct class of verbs yet, because things like “run”, “speak”, “bake”, “kick”, etc., which all describe events, are nevertheless unary predicates grouped together and behaving the same as things like “me”, “dog”, or “wristwatch”. Additionally, however, many things that you might think of as “nouns” are not in the class of WSL Nouns—kinship terms like “father”, “spouse”, etc., which are all defined as a relationship between entities, not a property of one, are semantically binary predicates, and classed as Roles. This means that the shared argument of a WSL sentence is not necessarily a semantic event! Sometimes, it’s a “father”—or any other entity that you want to describe in terms of its relations to things identified by other phrases. (Interestingly, while many natural languages recognize this same binary nature of kinship terms by making them obligatorily or inalienably possessed, Oneida in particular actually goes so far as to encode kinship terms as verbs!) Additionally, a third part of speech—the “Projector”—is introduced to serve as the explicit syntactic head of a clause, but these words cannot be verbs because they do not assign roles; rather they specify which argument of the clause controls the denotation of the clause as a whole, whether it is the entire proposition (creating a declarative or a question), the event (creating the equivalent of a nominalized complement clause), or something else (creating the equivalent of a relative clause).

41 In fairness, however, I must admit that suprasegmentals can be morphemes, too! Determining whether to count them as clitics or affixes, or whether their use in this case constitutes a new class of word (violating the “monocategorial” constraint) or a form of inflection (violating the “isolating” constraint) is, however, left as an exercise for the reader.
42 Changing word order may still affect pragmatics, but not the semantics.

Assuming, then, that Roles are not in fact serial verbs (for which, as in the cases of Cho’ron and Paonese, we can appeal to creator intuition to say “no, they are not; they are syntactic heads, but their phrases are nominal in nature, not predicative”), we can see a clear separation between the mechanisms of thematic role assignment and clause-level headedness. Despite the fact that WSL can distinguish one phrase as specifying the “event” (or, more generally, the “shared argument”), that phrase is not syntactically or semantically privileged. As there is thus no single phrase which both assigns roles and contains or otherwise controls the other argument phrases in a sentence, or the meaning of the clause as a whole, this indicates that WSL not only lacks verbs, but lacks any syntactic category which could reasonably be identified as a “verb phrase” according to any of the definitions we have considered.

Creating Novel Categories

While eliminating classes can sometimes seem hard, and in certain cases even impossible, it is nevertheless common enough to lead Alex Fink to state on the CONLANG-L mailing list:

This is virtually one of the standard things to do, by now, if one wants to make a strange language. Having only one predicate-y word class, erasing all the distinctions between nouns & verbs & adjectives.

This kicked off an extensive discussion of how to create a strange language by adding categories in addition to the traditional ones.

Davin and WSL provide some examples of how to create new kinds of categories instead of the traditional ones: First, create a model of formal semantics. Then, pick some features of your model to make your morphosyntax sensitive to, which happen to not match the features used by other languages. For Davin, this is (in broad strokes) whether a word represents a set or an operator on sets. For WSL, this is the arity of the underlying predicate, rather than what kind of thing it actually refers to. If one pretends for a moment that Luiseño were a conlang, it might be a combination of whether or not something is inherently part of a system, whether or not it can be possessed or manufactured, and whether or not it “makes sense” to attach a grammatical aspect to it.
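As a toy illustration of that recipe (entirely invented for this article—these are not actual WSL lexical entries), a lexicon can be made to assign parts of speech purely from a semantic feature such as predicate arity:

```python
# Toy lexicon: each word mapped to the arity of its underlying predicate.
# The entries are invented for illustration only.
LEXICON = {
    "run":   1, "dog":     1, "soft":   1, "wristwatch": 1,
    "agent": 2, "patient": 2, "father": 2, "spouse":     2,
}

def part_of_speech(word):
    """WSL-style category assignment: unary predicates become 'Nouns',
    binary predicates become 'Roles', regardless of whether the word
    denotes an event, an object, or a relation."""
    return "Noun" if LEXICON[word] == 1 else "Role"

assert part_of_speech("run") == part_of_speech("dog") == "Noun"
assert part_of_speech("father") == "Role"  # kinship terms are relational
```

Note that “run”, an event word, lands in the same class as “dog”, while the relational “father” does not—exactly the kind of cross-cutting category boundary described above.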

If, however, you are less than inclined to delve into formal semantics and logic, you will need a different

approach. For that, we will consider a few additional case studies.

Koenig, Jean-Pierre, and Karin Michelson, “Argument Structure Of Oneida Kinship Terms”, International Journal of American Linguistics, vol. 76, no. 2, 2010, pp. 169–205, available at

Oneida



Oneida is an Iroquoian language spoken in New York, Wisconsin, and Ontario. As mentioned above, it employs the unusual tactic of encoding kinship terms as verbs. Or at least, approximately so. Most kinship terms take pronominal agreement prefixes similar or identical to those used for the subject and object of transitive verbs. However, they take diminutive suffixes like nouns, and use nominal negation patterns, and fully inflected kinship terms can be used as nominals in sentences, as in (​>1sg-father-child​), which can be translated as the phrase “my father”, rather than the clause “he is my father”. But then they fail to take possessive marking typical of nouns, and they take reflexive inflections typical of verbs, as in (​3zoic.nonsg.pat-reflexive-parent-child​) “mother and daughter”, and they can only undergo incorporation into other verbs if they are first explicitly nominalized. So, they are kind of nouns, and kind of verbs at the same time. Perhaps they are special verbs which automatically form internally-headed relative clauses; or perhaps they are a third thing, a distinct category of “kinship terms” which form a new class, neither noun nor verb, though sharing some properties with both. It would not take too many tweaks by a conlanger to really cement the third-thing analysis.




Elkarîl

Elkarîl, by Mark Rosenfelder, has two different kinds of verbs. Indeed, if one is a dedicated splitter, one could argue that it’s actually another technically-verbless language, on the basis that it has two different lexical categories that divide up the functional load of verbs, and thus neither one is actually “just a verb”. In this case, however, that argument seems weak even to me, and Mark makes no such argument himself.

Elkarîl’s verby words are divided into physical verbs and mental verbs. Apart from the obvious semantic distinction, which would not be enough by itself to justify a difference in formal classification, these two groups are treated completely differently in Elkarîl’s morphosyntax—they appear in different syntactic positions in a clause, and they require different sets of cases for their arguments. Furthermore, most Elkarîl sentences actually use both categories simultaneously, with mental verbs encoding the purpose behind an action described by a physical verb, thus cementing the argument that they actually perform different and complementary functions in Elkarîl grammar. These functions can be demonstrated by the following set of related example sentences, where physical verbs are shown in red and mental verbs are shown in blue:

gguk âktuphuq char.
pierced heart swamp-thing’s sword
The sword pierced the swamp-thing’s heart.

gguk âktuphuq char
pierced heart swamp-thing’s sword murder fighter
The fighter killed the swamp-thing with his sword.

murder fighter
The fighter intends to kill.

Note that a clause can contain either a physical verb, or a mental verb, or both, with no coordinating structure; all three possibilities are shown. Where both types of verbs occur, mental verbs obligatorily follow any physical verbs and the arguments thereof.
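That ordering rule can be captured in a few lines. This is my own toy formalization in Python of the constraint just described—the tags and token lists are illustrative, built from the glosses above rather than from Elkarîl’s actual documentation:

```python
# A sketch of the Elkarîl-style ordering constraint: within a clause,
# mental-verb material (the verb and its arguments) must follow all
# physical-verb material; either part may be absent.
def well_ordered(clause):
    """True iff no physical-verb token appears after a mental-verb token.
    Clauses containing only one kind are trivially well ordered."""
    tags = [tag for _, tag in clause]
    # Sorting physical (False) before mental (True) must be a no-op.
    return tags == sorted(tags, key=lambda t: t == "mental")

# Glossed version of the two-verb example: physical verb plus its
# arguments, then the mental verb plus its arguments.
clause = [("pierced", "physical"), ("heart", "physical"),
          ("swamp-thing's", "physical"), ("sword", "physical"),
          ("murder", "mental"), ("fighter", "mental")]

assert well_ordered(clause)                      # attested order: fine
assert not well_ordered(list(reversed(clause)))  # mental first: ill-formed
```

Single-category clauses (physical only, or mental only) pass trivially, matching the “all three possibilities” noted above.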

This is similar to, but slightly different from, the Oneida approach: rather than taking two existing parts of speech and making a third by squishing together bits and pieces of both, instead take one existing part of speech and find a way to split it into two.

46 For more information on Elkarîl, see




Sheli

Sheli, by David Peterson, is supposed to be “South-east Asian” in style; therefore, of course it must have classifiers! On the subject of classifiers, however, David says:

For the longest time, I didn’t understand what they were for or why they existed. I still don’t know.
What I do know, though, is why Sheli has noun classifiers: Because I want it to.

As a result, the words which the Sheli documentation calls “classifiers” do not work in quite the same way as, say, Japanese classifiers. In Sheli, classifiers are used to mark all modifier constructions—every time an adjective of any kind appears with a noun, it must be connected with an appropriate classifier. That’s a fairly reasonable generalization of how counting-classifiers work in Chinese and Japanese, and indeed many languages have special morphemes that serve to attach modifiers to heads; but, I am not aware of any other language that groups those functions together. And it goes further: Sheli classifiers are not only used with nouns, but with verbs as well; in this way, they are somewhat like ASL classifiers, although their usage is still slightly different: they are used to attach certain adverbial meanings, and in some cases to mark nominalization.


Now, a lumper might be inclined to say “that’s still a classifier, just like Chinese and Japanese have; it just

has a comparatively expanded set of uses, just like the usage of the ‘dative’ case is different in French and
German. It’s the same kind of thing, just being used differently in different languages.” This is in fact the view
David himself holds:

I think they’re enough like classifiers to call them classifiers; they just have functions that go beyond
classifiers and bleed into other categories. They’re a bit more like noun classes (there are few instances when
you can get away with not using one), except that you can use multiple classifiers with the same noun.

How it’s used, however, is the greatest part of the definition of a category! And in my opinion, although
Sheli classifiers subsume the functions of Chinese and Japanese classifiers, their range of functions—the things
they are used for—is sufficiently different to qualify as a different kind of thing.

Siwa

Siwa, by Étienne Ljóni Poisson, has a category not of independent words, but of verbal infixes, called
“absolutive descriptives”. They are comparable to Japanese counters (or Sheli classifiers) in their semantic
range, conveying information about shape or other general physical qualities, but serve a completely different
syntactic function.

Absolutive descriptives attach to a verb root, and provide information about the absolutive argument (subject of an intransitive, object of a transitive), even though Siwa is otherwise not an ergative/absolutive language. This allows a Siwa speaker to refer to an absolutive indirectly, purely by what class it belongs to, or anaphorically. In the latter usage, descriptives often occur after the object of description has already been introduced in the discourse, obviating the need to mention it again or use an explicit pronoun.

Although Siwa absolutive descriptives happen not to be realized as independent words, and thus not as their own part of speech, there is no reason why something similar could not be. English (among other languages) has, for example, a subset of adjectives that are primarily used predicatively, and rarely attributively, like “glad”;

47 For more information on Sheli, see
49 Étienne Ljóni Poisson, “Absolutive Descriptives”, Fiat Lingua, June 2016, available at



“I’m glad” is perfectly acceptable (at least in my dialect), but “the glad person” sounds very strange. Such
adjectives are not usually considered to form a distinct part of speech in English, but there’s no reason why they
couldn’t be in some language. So, why not a distinct class of modifiers that can only appear in some other
particular morphosyntactic context, like describing absolutives? Give them their own unique inflectional
paradigm, or special derivational processes for creating new ones, and the argument that they are not the same
thing as regular adjectives is complete.


Verbless languages vary extraordinarily widely in their structure—as much as conlangs in general. While they are defined as a group by what they lack, what they do to compensate for that lack is a wide field ripe for further exploration. As a language creator, I would thus suggest that, should you wish to design a verbless language, don’t think of that as a goal in and of itself; rather, think of it as a constraint to inform a larger creative process. For all we know, the samples presented here may barely scratch the surface of the depth of interesting structures that may be available to verbless languages.

Additionally, whether or not a language qualifies as verbless depends critically on your choice of definition

for “verb”. As a potential critic of others’ conlangs, make sure that you can identify what definition is meant
when encountering a language claimed as “verbless”, so that you can identify whether or not those goals are
actually met. As a creator wishing to design a verbless language, make sure you know what definition you have
in mind, and how to evaluate it, so that you will know if you have succeeded—but should a critic identify verbs
in your language after all, be open to checking whether their analysis really adds any descriptive value to your
project or not. If it does, maybe your language has verbs after all, but that doesn’t mean you have failed; you
may still have created a unique and interesting language as a result of the attempt.
