Following Peirce, the general is a continuum: no set of individuals can fully characterize a general, and yet generals are real. We are abstracting an initial Card Model from the Ruby on Rails implementation of the wiki content platform Decko (ref decko.org). It is a particularly flexible example of a general, or category, that can include all web content; a master set of content patterns. For this descriptive purpose we will apply mathematics from category theory (CT) and type theory (TT) in order to make use of the tools and methods of those disciplines. Our goal is no less than a formal description of hyperlinked content in general, and to begin a larger mission of better supporting the growing global networks of content. The real work is the production and curation of data as forms of content, which is what we have been doing since before the dawn of the Internet as we now know it.
Decko, MediaWiki (ref), Twitter, and WordPress content, or even an arbitrary relational database, can be mapped onto these types along with their transformations. This work is inspired by Ward Cunningham's exploratory work on FedWiki (ref) and by the open source new media producer's need to share content not just as static creative works and data sources, but as growing, dynamic resources of many interlinked communities. One goal is a method, in natural languages extended by some shared language, to specify any object in this broad networked information space. These objects include but are not limited to wikis, microblogs, media streams, databases, and any data form that can link, transform and translate within a shared ontology of named units and types of content in containers that will be defined as namespaces below.
To serve the goal of exploring new ways to describe systems designs and architectures, we can also turn to recent work in type theory and related functional programming languages, which provide a common basis in language and method. This push toward formalism may seem at odds with the idea of the continuum, the incomplete general, but as in mathematics, logic and cybernetics we can describe forms and structures that create open spaces of expressiveness and inquiry that will not allow themselves to be (fully) captured by any complete set of rules. This is to say that their structure is drawn from an unbounded space of possible structures that is independent of and beyond their actual formal expression in any particular syntax or grammar. We seek a method of creating an abstract system model independent of syntax, such that when these descriptions are then linked to particular syntaxes, the particulars can be completely accounted for with tools and automation.
The hard problem of this task is that there are so many great ideas and projects with similar but often crosscutting goals, each trying to create unity around one or more approaches without knowing what the others are doing. This problem was known to the ancients and comes down through cultures as the Tower of Babel narrative. Any attempt at one common language to rule over all is doomed to fail, but a multi-lingual method that develops translations and even transformations of objects can be successful. Most of the common standards needed have already emerged as established and extensible formal syntaxes.
What is also being presented for deeper consideration is the relationship of formal languages and natural language. Even beyond this we are open to extended categories of signs, and will develop a bit the potential role of icons and indexes that can also support semantic relations. The categorical arrows and circles or dots in graph representations will gain expressive power by relating them to Peirce's categories and sign elements. The arrow, with its beginning, end and a connecting line, is thirdness; a functor of three places can take you into the category of relating arrows. A dot with no arrows is iconic; the dot/circle is in a sense the icon with no structure; it refers to anything. Only when you have some transformations through arrows can the dot acquire any form at all. This can and will be developed further below and in diagrammatic work to support these texts.
Mathematicians don't write papers only in math symbols and formulas; those are short-hands and sometimes aids to thinking collaboratively about problems. The guts of any proof is not the formal steps, but the understanding of what they mean. What we need as designers are tools that help us view and manipulate all of the work products of all the design teams of complex projects. We are limited to the sets of tools and languages that are actually available, and there is no place within existing formalisms that is generalized for all of the particulars that actually occur within the work practices of design engineers.
By semiotic methods linked to CT/TT (ref DI, Gangle) we will give tool and systems designers a new space of coevolutionary development. We can fight the Babel problem by establishing isomorphisms and expressing them in powerful tools that can bridge the confusion of many domains and jargons through the methods described here. A central part of the founding works of category theory is the abstraction of the equality relation. What does it mean for two mathematical objects, or two quantum states, to be equal or equivalent, and is there a difference in kinds of equality? Equality and identity are now ever more deeply tied to isomorphism: whether two mathematical systems or objects can be exactly transformed into one another. Preserving relations is the key part; in math and abstraction, all is founded on relations. All permanently discovered truths are in and within the systems of relations that they relate.
When we design, architect and build information processing and intelligence augmenting tools in formal languages, abstract relations are made concrete. Taken from the infinite spaces of possible Turing Machines with potentially infinite binary tapes, we enact and embody many connectable corners in actual digital machines and stores connected by global fiber and radio links, but that still exponentially growing actuality will be ever dwarfed by the Surreal Number (ref Conway) orders it is drawn from.
Most important is the way those structures are dual to the world in a way much like our own minds. We now realize that our brain constructs the world our mind sees from programs drawn from experience encoded at all time scales in our genetic, cultural and individual histories, and that we learn collectively at all of these scales in the spacetime contexts that are active and available to us at the point of action; now!
This layer of description is the bridge from the design context to any systems solutions. It is where the objects and signs are the common sense objects that we all encounter in different ways. Our examples are drawn from information networks and systems, which have a particularly complex relationship with collective intelligibility and sense making. At the surface they are the ordinary objects of grammatical natural language, but the details are shot through with specialization in jargons that are specific in reference within a small corner of the knowledge space. Which is to say that much knowledge is highly specialized, so we also need to bridge from ordinary language and create new common sense intuitions grounded in the best integration of the best of all the specialties.
Modern people need to be brought up with a start at understanding the new contexts of tools and methods whose scope and usefulness are open. They will have to help those who got there earlier, and will have many opportunities to achieve recognition for good work, and great work for those up to that level. Just because you have control of each little part of a system doesn't mean you know what will happen when you build it. Sometimes the bridge falls down; the system crashes and needs rebooting. Then we learn to design and build to expect failures, to test for easy errors before full systems tests. Rockets sometimes crash, and some failures have a very high cost.
We are in the space of open social media where content is semi-permanently published into different spaces that we will call namespaces in this text. This term will have specific meaning in examples, but we will define it generally in type theoretic terms. That will be the semantics of our bridge building, such that type theory descriptions can link directly to different domains for modeling components and subsystems in design/engineering analysis within the specialties invoked by the specific domains.
There are already a million and one proposed universal solutions, and we will not propose another. What we propose is a common language for talking and thinking collectively about and with information systems. We want a truly high level language: natural language. American English for this author, but our language must allow for accurate translations to and from any language. We note here that languages, natural and formal, are also related continua, as are the iconic characters now exhaustively cataloged in international coding standards (ref current internet character standards). Being drawn from a continuum, this international character set is not complete, but it is potentially complete in that we can add new characters to evolve it if we need to add sets for the space aliens we might meet in fact or fiction. In the world of ideas, anything we can imagine becomes potential fact: letters typed on a page on an antique typewriter, or these bits flowing into my computer and saved by server code partly written by me.
We need a foundation in applied math, particularly around computational theory, flexible enough to encompass quantum computing even though its foundations are not yet complete. What is needed is linkage between mathematical and physical theory via information theory and computation, and triadic semiotics is the linkage layer we need to establish. In doing so we must untangle more of the structure of the small numbers, the first three encompassing the rest via semiotics, the growth of signs. We can then say that dynamics is the action of signs, and demonstrate their active structure.
We don't expect that Peirce's architectonic logic and metaphysic of signs is complete, but it is an essential guide as his logic drives us forward from the categories to the signs and how they act, flow and connect. It will be possible to translate any archetypal system of ancient coded wisdom into signs, but we cannot know exactly how these signs and categories operated within the cultures and minds that created them. We invite holders of wisdom traditions to link their concepts and categories to this basic continuum of signs and sign actions such that they can become valuable informational artifacts of our collective histories. As these systems were created in contemporary minds, and come to us from our collective heritage as they do, we are now freed to create and recreate these traditions as it pleases us now and in the future. We owe it to ourselves and our future to passionately preserve every bit of it; not as superstition that stops thinking, but as mandalas of the opening of our minds to past and future.
The reference to a continuum in Card systems is a reference to Peircean thirdness, the space of law and rule. The forefront of theoretical physics is beginning to take cues from the information sciences, and so if there is a grounding of these theories in signs and semiotics, ideally an isomorphism, we might suggest that Peirce already had the solution. Peirce's continuum is the actual rules and habits in play with the local sign flows, so if the rules are set in the beginning, Peirce could be right that they are habits of an intelligence that is the mathematical/physical laws as we find them with the methods of science.
To describe information systems and languages, we don't need to explain how our systems emerge from the random bumping and relating of countless atoms and molecules without any plan to guide the way. Information systems are designed, not grown and evolved. On the other hand, the need for and description of mechanical recipes for computation arises in a long history of sign use and the spontaneous emergence of human language in all its many forms. In a sense computing itself emerges from symbolic language in a way that is deeply analogous to the emergence of the languages of life in the signed languages of DNA/RNA codons, which record a digital description that is absolutely necessary for living systems: the metabolic structures that manage the free energy cascade driven by the diurnal pumping between the 5000+ K radiation of the sun and the 3 K background at night.
In the wild, signs do not always neatly separate into punctuation and speech markers of tone and intonation. In fact, syntactic markers and even vowels were not present in many early writing systems. On the flip side, information systems and protocols are long on syntax and formal patterns that make exact and repeatable coding and decoding possible. For some recently designed coded representations, the specified formalisms can be so complete that there is little ambiguity in the syntax or in the mapping from linear codes and texts to a graph network representing the grammatical structure. Natural language is rarely complete in this way, but the structures are there even if they are built of semiosis, internal interpretive processes of the mind that are not exactly parallel to the analogous processes in formal language use.
We will refer to specific syntaxes and relate semantics to syntax in working examples, reference implementations to help bootstrap the work with foundations. Philosophically, we recognize the need for some structures before reasoning can even begin, and remaining faithful to the doctrine of signs and the pragmatic maxim will serve us well in a world of sign systems that are ultimately grounded in loops of semiotic self-reference. It's signs all the way down. Kant's a priori loses all grounding because what is prior is just another big sign that must be taken as given to even start. After we have complex models of thinking, of cognitive behaviors, we can speculate as to the a priori foundations that might produce them, but the reasoning and observations are also all a posteriori constructions. We've just forgotten who the foundations are constructed by and for, and then from this false certainty make unreasonable claims of necessary knowledge. This criticism goes as strongly against the purveyors of scientism as against those who sell spiritualism or one flavor of God or gods as ontology. Peirce and those who are expanding his work are just more clearly pointing out that there is no real difference in the logical foundations of one ontology or another. On what then do we ground our search for meaning? I see no better guide than some form of a pragmatic maxim, a standard that future inquiry may update as needed.
The message for design and architecture of future systems and cultural artifacts is that we are not limited by the ideas of the past. Yes, the only reliable methods of resolving doubt in inquiry are grounded in the methods of science, but those methods are not some absolute that can be discovered with certainty. Even with perfect method and theory, your observation task can never be complete and is necessarily made from our narrow perspective. Confusion about this is what leads to the so-called observer problem. There can be no observers without semiotics, without physical systems that carry meaning within systems and between the parts of complex systems. Peirce's language describing scientifically observed regularities as habits seems odd to us at first, but we don't know and may never know how the laws as we know them came to be what they are. If quantum objects really are like Leibniz's monads, couldn't they have just enough semiotic capacity to maintain their habits of relating as described in physical laws?
If you read Peirce closely you will see that he anticipates and resolves a lot of paradoxical questions that had not yet been described by anyone to speak of; if they were, those descriptions are largely lost. The monad doesn't need any complex kind of consciousness to take a habit and share it with all the similar monads generated in the big bang and cosmic inflation period. Even if you can't find ways to test these ideas, they represent a much cleaner way to allow for other meanings and systems we do not have access to (yet) to operate independently of us. This is, after all, the center of our conception of the real world: the bits that don't respond to what we think about them. But what about the parts that do? What parts would those be?
Debates about the reality of free will aside, if the concept is to have any meaning, there must be agency (single or multiple) where semiotic processes have physical effects on the operation of systems. This can really only be described through a fully developed ontology of signs and sign actions. Neurology and biosemiotics are starting to give us a language to talk about this. Without anthropomorphizing, it is clear in zoosemiotics that all biological systems respond to and generate signs. The cognitive systems must have a kind of code duality (ref paper) where internal languages of neural and chemical signals cascade directly from centers that receive and perceive signals to decision and action response systems that keep the organism coordinated with its environment. This is a bit beyond our purpose here of considering complex, collectively produced digital semiotic systems for the augmentation of human intelligence, but given where the bleeding edge of cognitive systems is leading us, it isn't that far afield. For now we will focus on what is better understood: the digital, formal systems of more traditional information systems and technology.
A great way to describe the purpose of the field of automated information systems is through the lens of the augmentation of human intellect. Some worry that automation will replace the person in all ways, but I say that will only happen if we lose sight of the human orientation of the motivations to build these systems in the first place. In their best uses, the ones pushing the capacities of the world's computational engines, the applications are scientific, and these engines, far from limiting human creativity, have kept opening up whole new vistas through this augmentation process. If we can describe whole systems as networks of semiotic processors and agents, there is the possibility of a comprehensive method to connect all fields of knowledge and make vast collections of gathered and curated data available to a collective cognition.
The solution to the runaway evil AI is the same as the one to the possibility of the evil genius. Those stories are just for fun, because some of us like to be scared by stories. How could a collective intelligence (CI or AGI) go bad? The first example of CI has been emerging in open source (OS) software and commons based peer production (CBPP, ref Benkler Wealth of Networks). It isn't hard to see how the OS principle that many eyeballs find more bugs is the same principle that makes concealing a secret evil purpose within the system's code unlikely.
Semiotic thinking is also productive in trying to understand and develop better tools and methods to deal with crisis. A new virus in the human population needs this kind of analysis at all levels, from the signaling and languages of the cells that the virus populations interact with, to the social dynamics and social aspects of the spread of pathogens. The response is all socio-cultural language acts and how they drill down to behaviors that interact with the lower layers of the mind-independent presence of the virus in the cells. Is there a test, and can it be operationalized in public health policy?
Good public health policy and civil preparedness and security can't emerge in a context of political scorched earth tactics and strategy. A public that has been trained to suspect science based reasoning has disconnected from reality and is in great danger from motivated messaging. Too much public policy is debated in private and sold by any means that works. As agents of our time and place we are called on to help increase our stores of knowledge and wisdom. Some officials are messing with data collection and creating dark data intentionally, so we should define this concept. Dark data, like dark matter and energy, is there, but you have no access to it. If data is corrupted or not taken at its source, it will be dark. All time series data has a dark beginning before it was collected systematically, but we are talking here about data outages. This can be loss of signal from a planetary mission, which if permanent, ends the mission. If officials mess with the data at its source for political gain, that is a loss for everyone. Fortunately, good data science technique can intelligently restore the data from corroborative sources and even highlight the cause of the outage.
As typed mathematical objects we have vectors, which can be indexed. The simplest possible vector is the bit, binary or boolean string, chunked into bytes and words in machine level representations and hardware realizations. These bytes or words are typically indexed within working memories. Any particular hardware will have to specify these raw syntaxes in detail, and system and language tools realize the actual syntaxes, including binary coded raw programs or machine codes. For our purposes, we only need to know that a text string is a vector of whole number character codes from some set of icons. A bit string can be treated as a text with a two character alphabet, although in practice raw binary texts are most efficiently realized according to common conventions.
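As a concrete illustration, a minimal Ruby sketch (the methods used are standard Ruby; the encoding choices are illustrative, not a claim about any particular machine representation):

```ruby
# A text string as a vector of whole-number character codes.
text  = "Card"
codes = text.codepoints          # => [67, 97, 114, 100]

# Round trip: the vector of codes fully determines the text.
codes.pack("U*")                 # => "Card"

# A bit string as a text over the two-character alphabet {"0", "1"};
# the 8-bit chunking is one common convention, not a necessity.
bits = codes.map { |c| c.to_s(2).rjust(8, "0") }.join
# => "01000011011000010111001001100100"
```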
Our concern here is only with the types of objects that occur at the level of realization: binary strings for machine codes, and bitstrings as a representation of text strings, plus the idea of raw pointers, indexes to locations in the store. This is the scope of what the implementation syntax will need to do. Although we mention machine codes, the description of whatever operations and deeper syntax are required in concrete realizations is intentionally not at this level. We will also stop short of syntax descriptions and define code attributes in a general syntax that could be translated easily to any modern development language for integration with other systems.
The Decko Card model I describe here will be close to the current implementation that holds the versions of this text. This is a familiar pattern now to many, but we must note that it did not exist until Ward invented and named the Wiki. In this sense, our languages, natural and formal, are always evolving the intelligence of our worlds. The communities of open source (OS) coders, designers and architects are collaborating to do this every day. You literally cannot keep up with it. This work is addressed to the now and future generations who will step up to change the world and put us back on track to the open future. We call for the minds of the world to learn to come together in ever larger and more densely interconnected networks of care and support.
We won't need a financial system anything like the one we have, which is literally burning down the house and gardens of the world. If we damage Gaia, we will suffer, as we have already inflicted suffering upon many others on the path to economic efficiency and technological supremacy. We will need a new financial system for the transition, to fund commons development of all the projects already in motion. Consider this paying back the first generations of rebels who created the GPL (ref FSF, RMS). The founders paid it forward to us, and our duty is to take it to the next levels. This is not an economic paper, and we only note the architectures of support that are needed to evolve our collective vision into a realizable future. The tools of the transition financial system are part of the systems that would eventually be designed and built by these methods.
Before continuing on to the technical descriptions, we state that all of our design work is human oriented and in support of health and wealth creating (social) process architectures. We can't do the big things or the other things (ref JFK speech about space program) without first caring for ourselves and others. Our world is now very complex and deeply connected and internetworked. We anticipate the emergence of new collective consciousnesses, first in smallish networks and building out. In comparison, what we describe here is simple, or at most complicated, not complex. We design our systems this way because the world is complex enough as it is. Push too hard for unity and confusion and disharmony result. The world is just so many people building local towers to God's realm (ref Tower of Babel) and it has to stop. Pull together to something good enough; the perfect is the enemy of the good, and this is why.
The general class is Wiki, so let us consider what is common to Wikis. The general model is named data objects in collections. The objects can be edited by users, often under very open identity policies, but these are not essential. So we have (a minimal sketch follows this list):
Name -> Content Instance within a Namespace
The current State of a Namespace is a Set of (Name, Content) pairs
A Name is the way to reference the particular content to view and edit
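A minimal Ruby sketch of this model, with the Namespace state as a map of (Name, Content) pairs (the names and contents are invented for illustration):

```ruby
# The current State of a Namespace: a set of (Name, Content) pairs,
# here a Hash keyed by Name.
namespace = {
  "HomePage" => "Welcome to the wiki.",
  "Syntax"   => "How to mark up content."
}

# Name -> Content Instance within the Namespace:
namespace["HomePage"]                 # => "Welcome to the wiki."

# Editing replaces the pair for a Name, yielding the next State:
namespace["HomePage"] = "Welcome, again."
```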
Any particular wiki platform will need to further define the syntax and semantics of these constituents. We will proceed as one would with a mathematical proof or demonstration, defining only the terms and concepts that we use, in sufficient detail to identify the particulars, while also making the continuum of possible mappings apparent so that it can be applied flexibly to the categories described.
We start with the Card objects themselves: the (Name, Type (another Card), Content) triples. The Name is both the human handle and the reference used in URIs, content and code. The use of names as references is what dictates the semantics of names. Because we will use names to identify the pairs, multiple instances of the same name might get different accommodations on different platforms. Decko Cards also have another attribute, a Cardtype, which is itself a Card. Already we see that a Card object is not complete in itself, but always has a Cardtype to select semantics. Therefore we have Cards (a code sketch follows the list):
Every Card has:
A Name which reduces to a keyname equivalence class (name -> key function: Keyname)
A Card may be assigned a permanent identifier such as a numeric CardID
A Cardtype which is another Card
Names can be simple or compound, a list of simple names
Optionally, an additional permanent name to be used in code can be assigned to some cards. (make this a footnote? ->) This might be considered an artifact of the customizable code actually being separate from the other rules (Settings that can be appended to Set cards), but maybe it has more to do with multiple namespaces (data and code) and we may need to generalize the codename concept.
An optional Content object; Content persistence and semantics can be categorized by Sets
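A minimal Ruby sketch of these attributes. The key function here is deliberately simplified (downcasing and underscoring); Decko's actual keyname reduction handles more variations, so treat the details as assumptions for illustration:

```ruby
# A Card: Name, Cardtype (itself a Card, named here for brevity),
# optional Content, and an optional permanent codename.
Card = Struct.new(:name, :cardtype, :content, :codename, keyword_init: true) do
  # Name reduces to a keyname equivalence class (name -> key function).
  # Simplified: the real key function normalizes many more cases.
  def key
    name.split("+").map { |part| part.downcase.gsub(/\s+/, "_") }.join("+")
  end

  # Simple names have no parts; compound names are lists of simple names.
  def simple?
    !name.include?("+")
  end
end

c = Card.new(name: "My Card+notes", cardtype: "Basic", content: "...")
c.key      # => "my_card+notes"
c.simple?  # => false
```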
This is as far as we can get without putting Cards into contexts. Current implementations are single deck namespaces where the keys are unique and CardIDs are sequentially assigned record identifiers in the underlying database. We do not specify these details at this level, as this is not a syntactic specification. We will describe multi-deck semantics in terms of the object oriented ontologies we are developing.
A Deck can be constructed by standard operations that add, update and delete cards from the set that is the current state of the deck. Note that the data representation can be mutable or immutable, with the latter having a number of advantages for caching, tracking and managing change sets, as is done with code objects and more by tools like git and GitHub. With a data model similar to git's distributed change management model, the implementation can support flexible workflow tools for Card content in complex networks of collaborating producers.
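A minimal Ruby sketch of deck construction over immutable states, in the git-like spirit just described (the frozen Hash representation is an assumption for illustration):

```ruby
# Each operation returns a new deck state and leaves the old one intact,
# which is what makes change sets cheap to cache, diff and track.
module DeckOps
  def self.add(deck, key, content)
    deck.merge(key => content).freeze
  end

  def self.update(deck, key, content)
    raise KeyError, "no card #{key}" unless deck.key?(key)
    deck.merge(key => content).freeze
  end

  def self.delete(deck, key)
    deck.reject { |k, _| k == key }.freeze
  end
end

v1 = { "home" => "hi" }.freeze
v2 = DeckOps.add(v1, "about", "who we are")
v1.key?("about")   # => false; the prior state is still addressable
```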
Deck semantics: Names (via key reduction) are unique in a given deck.
A Cardtype might be used to distinguish cards with the same name, but this is not done in the reference implementation.
Set Patterns are defined with name patterns.
Special cards are the indicators of patterns
These cards have zero to two slots (in the reference implementation; this could be extended) that specify the pattern parameters.
Zero slot keys are simple Cards that represent sets with no parameters, for example all cards (*all) or all cards whose names begin with the character * (*star).
One slot uses the card in the slot as the Set selector; for example, ACardtype+*type is a single slot set representing all cards of a given type, ACardtype in this case.
More slots means more parameters, such as a right selector with a type, or, in a WikiRate extension, two types for the left and right parts of a given card.
Card Parts: When a cardname has more than one part, i.e. is not simple, it has parts: the tag or right part, which can be matched with *right Sets, and a trunk or left part. The right part is always simple; therefore Cards are constructed from a trunk card by adding to the right, adding a tag.
Settings and Rules: Another special type of simple card is the Setting, which when added (as right part or tag) to a Set card trunk becomes a rule for the trunk card (Set), such that the Setting is defined for that Set.
Precedence: When multiple rules match a card for a given Setting, the most specific rule is used, as defined by an ordering of the Set keys.
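A minimal Ruby sketch of this lookup. The set patterns follow the descriptions above, but the exact precedence ordering shown is an assumption for illustration, not the reference implementation's definition:

```ruby
# Matching set keys for a card, from most specific to most general.
def matching_set_keys(name, type)
  parts = name.split("+")
  keys  = ["#{name}+*self"]                          # just this card
  keys << "#{parts.last}+*right" if parts.size > 1   # cards with this tag
  keys << "#{type}+*type"                            # all cards of this type
  keys << "*all"                                     # every card
end

# The first matching set that defines the Setting provides the rule.
def rule_for(setting, name, type, rules)
  matching_set_keys(name, type).each do |set|
    rule = rules["#{set}+#{setting}"]
    return rule if rule
  end
  nil
end

rules = { "notes+*right+*structure" => "...", "*all+*structure" => "..." }
rule_for("*structure", "My Card+notes", "Basic", rules)
# => the "notes+*right" rule wins over "*all"
```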
Code Rules: view formatting and implementation details are organized or generated according to Sets and Set precedence definitions. In the Ruby reference implementation this maps naturally onto a loading process for singleton classes for each Card object. Similar mechanisms are available in most object oriented languages. Defining modular code in a universal implementation language can be accomplished with modern tools, as is or extended; that is the next step from this high level description. A tool should be able to extract universal code from the RoR implementation.
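A minimal Ruby sketch of that singleton-class loading. The module and method names are invented for illustration; only the mechanism (mixing set modules into each card object's singleton class, most specific last) reflects the description above:

```ruby
# Set-specific code as modules, mixed into each card's singleton class.
module AllSet
  def render
    "<div>#{content}</div>"
  end
end

module BasicTypeSet
  def render
    "<p>#{content}</p>"
  end
end

CardObj = Struct.new(:name, :type, :content)

# Include from most general to most specific: later includes come first
# in the ancestor chain, so the most specific rule wins at dispatch.
def load_set_modules(card)
  card.singleton_class.include AllSet
  card.singleton_class.include BasicTypeSet if card.type == "Basic"
  card
end

card = load_set_modules(CardObj.new("home", "Basic", "hi"))
card.render   # => "<p>hi</p>"
```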
Multiple Decks are a critical design topic for Card Systems to grow into diverse networks of content. We want to define how Namespaces and having many Decks might function in an evolving network of heterogeneous content. Eventually, you should be able to load a specification of MediaWiki and have naming support for external MediaWiki instances. Proper linkages both to ReSTful APIs on each platform and to Federation (ref Ward's FedWiki project) are expected to lead to processes of workflow production chains for a broad class of content.
Decko is designed with Web access and APIs in mind, and in the implementation the Names are seamlessly transitioned from URI space to content links and code references to cards. Since we don't have a proposed
This will require some refactoring in the structure of this card: adding a separate card to define terms before talking about meanings, then syntax:
After having already gotten some way into this, I realize some of the initial premises were a bit off the mark. Namely: 1) cardspace logic doesn't actually depend on names; names are independent, language grounded symbol spaces that are used to bind cardspace objects and structures, but are secondary to it, a separate (sub)system linked through type logic and methods. 2) Neither names nor content are atomic. 3) My initial insight was that combining namespaces is like mounting a filesystem in a file namespace. Now we will instead consider that Namespace1+Namespace2 => Namespace3 such that it is defined how the bindings of 1 and 2 mix, particularly with collisions.
We must go through types, and more generally for card systems through set based subspaces, to be able to fully identify the types and specific methods bound within cardspaces. Thus our first definition:
CardIdentity is a typed token that relates two Cards in an equivalence relation.
These tokens may be scoped to a particular group of namespaces; therefore to join namespaces as above, we would translate Identity tokens as needed. This would be well defined, and in some scopes these can be integer tokens from lower layers of the representation (i.e. active record id fields in primary card db tables), but we can know these relative scopes with precision, hide them in the implementations, and expose them for debugging.
SimpleCards are the only cards that can be bound in a namespace, thus implicitly naming any joined names and keys.
SimpleCards must be bound to a CardIdentity to be used, and conventionally each has one or more names.
JointCard is Card joined SimpleCard
Card is JointCard or SimpleCard
Note that this is asymmetric. One might want branching on the right as well as on the left as here; both may be semantics someone wants, but we do not consider right branching here.
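A minimal Ruby sketch of this grammar and the left-branching it implies (the type names are illustrative):

```ruby
SimpleCard = Struct.new(:name)
JointCard  = Struct.new(:trunk, :tag)   # tag is always a SimpleCard

# Card is JointCard or SimpleCard; joins branch on the left, so
# "a+b+c" parses as Joint(Joint(a, b), c): a trunk extended by tags.
def parse_name(name)
  name.split("+")
      .map { |part| SimpleCard.new(part) }
      .reduce { |trunk, tag| JointCard.new(trunk, tag) }
end

parse_name("a+b+c")
# => JointCard(JointCard(SimpleCard("a"), SimpleCard("b")), SimpleCard("c"))
```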
Codenames are language independent names that are bound in codespaces.
Codenames are never translated and are not within any natural language. When codenamed cards are not given language specific names, any form of translation may be invoked, because these cards and their CardIdentities are permanently bound in and to the code.
Cardnames are the natural language bindings for a card.
These are the human presentable representations of cards. As described above, only SimpleCards can be bound to names, with JointCards named implicitly through their parts.
SimpleCards don't have to actually exist to have names, and thus virtual JointCards can be constructed; we don't necessarily require that all the SimpleCard parts of a given JointCard actually exist. On the other hand, implementations may need to create some of these virtual parts in order to permanently bind to the correct CardIdentity and CardKeys. Which reminds me, we need:
CardIcon is an alternative graphical icon bound to a card as an iconic variation.
This is speculative, and might give some additional power. Icons are potentially (if well designed) international signs with meanings translatable into languages. These probably should have codenames, and could have a special category of key that is related to the codename. Otherwise a textual name might need to be based on either an index within a category (icons?) or a CardID token.
Cardkey is an identity token that represents a unique name in all its variations.
E.g. for a Date, this might be an integer day number with a specified, history bound DayZero. Or numerics might take on syntax from their NameSpace. This is speculative; we are exploring the future concept of Card Decks of Namespaces.
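A minimal Ruby sketch of the Date case; the choice of DayZero here is arbitrary and purely illustrative:

```ruby
require "date"

# A Cardkey for dates: an integer day number relative to a declared
# DayZero. Every name variant of the same date reduces to the same token.
DAY_ZERO = Date.new(1970, 1, 1)   # an arbitrary epoch for illustration

def date_key(date)
  (date - DAY_ZERO).to_i
end

date_key(Date.new(1970, 1, 2))    # => 1
date_key(Date.parse("1 Jun 2024")) == date_key(Date.parse("2024-06-01"))
# => true: different syntaxes, one key
```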
Current thinking is that Names bind within Namespaces, and we need to define the semantics of namespace stacking to resolve:
FreeNames are symbolic atoms used in expressions; CardNames in all their complexities, CodeNames, and CardIdentities for reference and storage/networked representation and reference.
Decks are maps of structured content
We are describing (mostly) a Deck of Cards type of Namespace with complete definitions and specifications of the Types above.
Namespace1 + Namespace2 => Namespace3
When the added namespaces are compatible and disjoint, addition is pretty simple and natural: just take the union of the two sets of Name (or just ID) x TypedValue pairs. Name collisions are the complexity when they are not disjoint. When a collision occurs, a semantics, a rule for choice, can be used to resolve it.
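A minimal Ruby sketch of this addition, with the collision rule passed in as a block so different choice semantics are interchangeable (the representation as Name => Content hashes is illustrative):

```ruby
# Namespace1 + Namespace2 => Namespace3. Disjoint names simply union;
# on a collision, the supplied rule chooses (or merges) the binding.
def add_namespaces(ns1, ns2, &collision_rule)
  ns1.merge(ns2) do |name, left, right|
    collision_rule ? collision_rule.call(name, left, right) : right
  end
end

a = { "home" => "A's home", "a_only" => "..." }
b = { "home" => "B's home", "b_only" => "..." }

add_namespaces(a, b) { |_name, left, _right| left }
# => {"home"=>"A's home", "a_only"=>"...", "b_only"=>"..."}
```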
Even without all the necessary formalisms and proofs, what we have here is a (relatively) simple system of named data with structures that can be expressed by decomposing and recomposing equivalent Namespace instances according to these structures, in the service of defining semantic relations within the structures.
Functions defined with respect to these types and structures give very expressive high level languages that build on these relations. Content has only been mentioned in passing; the essential semantic bit is that some content types are capable of representing references to names and cards. The important part here is not the reference syntax, nor that of embedding within the content, but that references can be enumerated, indexed and processed recursively in views.
Content is what all this complexity is about. We define two related categories that represent special namespaces in how code is defined: Formatters and Views.
It works something like this: each Formatter has a name (a codename; cards are not needed or implemented at present), and any number of named (again, in effect codenamed) views can be defined in one or more formatters. Some views are defined for all formatters, and potentially overridden via the logic of adding namespaces. These view names are actually accessible in syntax, as the name of views in references (nests) and as the view parameter in URL query parameters. These will someday need translatable cards to give them proper inclusion in namespaces.
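A minimal Ruby sketch of that two-level lookup: named views registered under named formatters, with unknown views falling back to a base formatter. The registry shape and the names are assumptions for illustration, not Decko's actual API:

```ruby
# Formatters are maps from view names to render procs.
FORMATTERS = Hash.new { |h, k| h[k] = {} }

def view(formatter, name, &body)
  FORMATTERS[formatter][name] = body
end

def render(formatter, view_name, card)
  body = FORMATTERS[formatter][view_name] || FORMATTERS[:base][view_name]
  raise "unknown view #{view_name}" unless body
  body.call(card)
end

view(:base, :core) { |card| card[:content] }                  # for all formatters
view(:html, :core) { |card| "<div>#{card[:content]}</div>" }  # html override

render(:html, :core, { content: "hi" })   # => "<div>hi</div>"
render(:text, :core, { content: "hi" })   # => "hi" (falls back to :base)
```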
Just added raw sketches for now. The first one has the Card categories and the parts, trying to represent the basic structures and partitioning of Cards as a category. There are different methods to identify cards represented here, and these will all relate to their identity within Decks, or sets of cards, which are expanded in the second image. Note the partition of all cards strictly into Simple cards, with no parts and a name, and Joints, which are composed of two (existing) cards and by definition get a name by joining the part names with a syntax character (conventionally + in Decko).
The other partition is real/virtual. Simple Cards must also be real, but because of rules, a card doesn't need to exist to have (virtual) content. The typical case is the join of some real card with a tag, matching a *structure rule that supplies the card's virtual content. Real cards will have a CardIdentity. Only real cards can be partitioned from a Deck and used in operations to compose virtual content.
In +i2, we have some images for Decks: operations, references and Sets that reference subsets or partitions of a Deck state, and we show a couple of examples of how given Cards as functional parameters are mapped to partitions. It shows how the parts of a Set Card are like arguments to the pattern; there are 0, 1 and 2 parameter patterns in Decko implementations, but there could be more as needed. Here the parameters are parts of the card, and in the search language CQL (Card Query Language) there are likewise references to Cards, for example to match a particular type by referencing the Cardtype card, or cards whose right part matches the tag card.
The next line shows how paths in code are related to the cards. Only cards with Codenames can be referenced in code, and within the 'sets' paths the base path is just a reverse of the Set Cards: .../key/p1code/p2code/name.rb This is Ruby, but any OO language can support something like this to overload modular code onto the relevant object variants.
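A minimal Ruby sketch of that reversal, mapping a Set card's parts to a code path (the directory layout shown is an assumption for illustration, not the reference implementation's exact convention):

```ruby
# The code path reverses the Set card's parts: pattern first, then
# parameters, then the file holding the views/methods for that set.
def set_code_path(set_parts, file)
  pattern, *params = set_parts.reverse
  File.join("sets", pattern.delete("*"), *params.map(&:downcase), file)
end

set_code_path(["Basic", "*type"], "core.rb")   # => "sets/type/basic/core.rb"
set_code_path(["*all"], "core.rb")             # => "sets/all/core.rb"
```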
Below is the idea that it would be nice to have the code in Cardname space, joined onto the Sets just like Settings are to make Rules, only instead of a Setting we would have a View and possibly a Formatter. In the actual code, there are many views and sometimes helper methods (on Cards) defined in each file, and the path locates these files in the Ruby module/class namespaces.
And now, in another card, a start at a spec formal enough to feed tools. This will be an experiment in Diagrammatic Immanence.
I'm interested in feedback from the syntactic descriptions to understand the necessary structure in the semantic type/category and object spaces. Categorically, Cards are representation generators. In Decko currently this is Format x View as an operator or functor space over Card Decks, and mutations of Decks via a well defined Deck monad make things very cacheable and efficient, as well as hiding the fast pace of all the changes in scaled infrastructure in the corners generating and testing their additions to Decks for one purpose or another.