Introduction
Following Peirce, the general is a continuum and no set of individuals can fully characterize the general, and yet generals are real. We are abstracting an initial Card Model from the Ruby on Rails implementation of the Wiki content platform Decko (ref decko.org). It is a particularly flexible example of a general, or category, that can include all web content; more specifically, it can be a master set of content patterns. For this descriptive purpose we will apply mathematics from category theory (CT) and type theory (TT) in order to make use of the tools and methods of these disciplines. Our goal is no less than the formal description of hyperlinked content in general, and to begin a larger mission of better supporting the growing global networks of content. The real work is the production and curation of data as forms of content. This is already what we have been doing since before the dawn of the Internet as we now know it.
Decko, MediaWiki (ref), Twitter, WordPress content, or even an arbitrary relational database can be mapped onto these types, as can Ward Cunningham's exploratory work on FedWiki (ref).
Our goal is a natural language specification of general concepts (wiki, microblog, mediastream, database, and so on) that will lead to formal languages and methods.
The greater purpose of this document is to explore new ways to describe systems designs and architectures. New work in type theory and related functional programming languages provides a common basis that, like mathematics, logic, and cybernetics, can describe forms and structures that are not limited by their formal expression. This is to say that their structure is drawn from unbounded spaces of possible structures, independent of and beyond their actual formal expression in any particular syntax or grammar. A goal is to describe an abstract system model independent of syntax, such that when these descriptions are then linked to particular syntaxes, the particulars can be completely accounted for with tools and automation.
What is being presented for deeper consideration is the relationship of formal languages and natural language. Mathematicians don't write papers primarily in mathematical symbols and formulas; those are shorthands and sometimes aids to thinking collaboratively about problems. The guts of any proof is not the formal steps, but the explanation of what they mean. What we need as designers are tools that help us view and manipulate all of the work products of all the design teams of complex projects. We are limited to the sets of tools and languages that are actually available, and there is no place within formalisms that is generalized for all of the particulars that are actually within the work practices of design engineers.
A central part of the founding works of category theory is the abstraction of the equality relation. What does it mean for two mathematical objects, or two quantum states, to be equal or equivalent, and are there different kinds of equality? Equality and identity are now ever more deeply tied to isomorphism: whether two mathematical systems or objects can be exactly transformed into one another. Preserving relations is the key part; in mathematics and abstraction, all is founded on relations. All permanent discovered truths are in and within the systems of relations that they relate.
When we design, architect, and build information processing and intelligence augmenting tools in formal languages, abstract relations are made concrete. Taken from the infinite spaces of possible Turing Machines with potentially infinite binary tapes, we enact and embody many connectable corners in actual digital machines and stores connected by global fiber and radio links, but that still exponentially growing actuality will be forever dwarfed by the Surreal Number (ref Conway) orders it is drawn from.
Most important is the way those structures are dual to the world, in a way much like our own minds. We realize that our brain constructs the world our mind sees from programs drawn from experience encoded at all time scales in our genetic, cultural, and individual histories, and that we learn collectively at all of these scales in the spacetime contexts that are active and available to us at the point of action: now!
This layer of description is the bridge from the design context to any systems solutions. It is where the objects and signs are the common sense objects that we all encounter in different ways. Our examples are drawn from information networks and systems, which have a particularly complex relationship with collective intelligibility and sense making. At the surface they are the ordinary objects of grammatical natural language, but the details are shot through with specialization in jargons whose reference is specific within a small corner of the knowledge space. Which is to say that much knowledge is highly specialized, so we must also bridge from ordinary language and create new common sense intuitions grounded in the best integration of the best of all the specialties.
Modern people need to be brought up with a start at understanding the new contexts of tools and methods whose scope and usefulness are open. They will have to help those who got there earlier, and will have many opportunities to achieve recognition for good work, great work for those up to that level. Just because you have control of each little part of a system doesn't mean you know what will happen when you build it. Sometimes the bridge falls down; the system crashes and needs rebooting. Then we learn how to design and build to expect failures, to test for easy errors before full systems tests. Rockets sometimes crash, and some failures have a very high cost.
We are in the space of open social media, where content is semi-permanently published into different spaces that we will call namespaces in this text. This term will have specific meaning in examples, but we will define it generally in type theoretic terms. That will be the semantics of our bridge building, such that type theory descriptions can link directly to different domains for modeling components and subsystems in design/engineering analysis within the specialties invoked by the specific domains.
There are already a million and one proposed universal solutions, and we will not propose another. What we propose is a common language for talking and thinking collectively about and with information systems. We want a truly high level language: natural language. American English for this author, but our language must allow for accurate translations to and from any language. We note here that languages, natural and formal, are also related continua, as are the iconic characters now exhaustively cataloged in international coding standards (ref current internet character standards). By the continuum hypothesis, this international character set is not complete, but it is potentially complete: we can add new characters to evolve it if we need to add sets for the space aliens we might meet in fact or fiction. In the world of ideas, anything we can imagine becomes potential fact, whether as letters typed on a page on an antique typewriter or as these bits flowing into my computer and saved by server code partly written by me.
{{Metamagical Groundings|titled}}
Syntax and Semantics
We will refer to specific syntax and relate semantics to syntax in working examples, reference implementations that help bootstrap the work with foundations. Philosophically, we recognize the need for some structures before reasoning can even begin, and remaining faithful to the doctrines of signs and the pragmatic maxims will serve us well in a world of sign systems that are ultimately grounded in loops of semiotic self-reference. It's signs all the way down. Kant's a priori loses all grounding because what is prior is just another big sign that must be taken as given to even start. After we have complex models of thinking, of cognitive behaviors, we can speculate as to the a priori foundations that might produce them, but the reasoning and observations are also all a posteriori constructions. We've just forgotten who the foundations are constructed by and for, and then from this false certainty we make unreasonable claims of necessary knowledge. This criticism goes as strongly against the purveyors of scientism as against those who sell spiritualism or one flavor of God or gods as ontology. Peirce and those who are expanding his work are just more clearly pointing out that there is no real difference in the logical foundations of one ontology or another. On what, then, do we ground our search for meaning? I see no better guide than some form of a pragmatic maxim, a standard that future inquiry may update as needed.
The message for the design and architecture of future systems and cultural artifacts is that we are not limited by the ideas of the past. Yes, the only reliable methods of resolving doubt in inquiry are grounded in the methods of science, but those methods are not some absolute that can be discovered with certainty. Even with perfect method and theory, your observation task can never be complete and is necessarily made from a narrow perspective. Confusion about this is what leads to the so-called observer problem. There can be no observers without semiotics, without physical systems that carry meaning within systems and between the parts of complex systems. Peirce's language describing scientifically observed regularities as habits seems odd to us at first, but we don't know, and may never know, how the laws as we know them came to be what they are. If quantum objects really are like Leibniz's monads, couldn't they have just enough semiotic capacity to maintain their habits of relating as described in physical laws?
If you read Peirce closely you will see that he anticipates and resolves a lot of paradoxical questions that had not yet been described by anyone to speak of; if they were, those descriptions are largely lost. The monad doesn't need any complex kind of consciousness to take a habit and share it with all the similar monads generated in the big bang and cosmic inflation period. Even if you can't find ways to test these ideas, they represent a much cleaner way to allow for other meanings and systems we do not have access to (yet) to operate independently of us. This is, after all, the center of our conception of the real world: the bits that don't respond to what we think about them. But what about the parts that do? What parts would those be?
Debates about the reality of free will aside, if the concept is to have any meaning, there must be agency (single or multiple) where semiotic processes have physical effects on the operation of systems. This can really only happen through a fully developed ontology of signs and sign actions. Neurology and biosemiotics are starting to give us a language to talk about this. Without anthropomorphizing, it is clear in zoosemiotics that all biological systems respond to and generate signs. The cognitive systems must have a kind of code duality (ref paper) where internal languages of neural and chemical signals cascade directly from centers that receive and perceive signals to decision and action response systems that keep the organism coordinated with its environment. This is a bit beyond our purpose here of considering complex, collectively produced digital semiotic systems for the augmentation of human intelligence, but given where the bleeding edge of cognitive systems is leading us, it isn't that far afield. For now we will focus on what is better understood: the digital, formal systems of more traditional information systems and technology.
Kinds of Signs: Formal Linguistic Systems (aka traditional IS/IT)
A great way to describe the purpose of the field of automated information systems is through the lens of augmentation of human intellect. Some worry that automation will replace the person in all ways, but I say that will only happen if we lose sight of the human orientation of the motivations for building these systems in the first place. In their best uses, the ones pushing the capacities of the world's computational engines, the applications are scientific, and these engines, far from limiting human creativity, have kept opening up whole new vistas through this augmentation process. If we can describe whole systems as networks of semiotic processors and agents, there is the possibility of a comprehensive method to connect all fields of knowledge and make vast collections of gathered and curated data available to a collective cognition.
The solution to the runaway evil AI is the same as the one to the possibility of the evil genius. Those stories are just for fun, because some of us like to be scared by stories. How could a collective intelligence (CI or AGI) go bad? The first example of CI has been emerging in open source (OS) software and commons based peer production (CBPP, ref Benkler Wealth of Networks). It isn't hard to see how the OS principle that many eyeballs find more bugs is the same principle that makes concealing a secret evil purpose within the system's codes unlikely.
Semiotic thinking is also productive in trying to understand and develop better tools and methods to deal with crises. A new virus in the human population needs this kind of analysis at all levels, from the signaling and languages of the cells that the virus populations interact with to the social dynamics and social aspects of the spread of pathogens. The response is all socio-cultural language acts, and how those drill down to behaviors that interact with the lower layers of the mind-independent presence of the virus in the cells. Is there a test? Can it be operationalized in public health policy?
Good public health policy and civil preparedness and security can't emerge in a context of political scorched earth tactics and strategy. A public that has been trained to suspect science based reasoning has disconnected from reality and is in great danger from motivated messaging. Too much public policy is debated in private and sold by any means that works. As agents of our time and place we are called on to help increase our stores of knowledge and wisdom. Some officials are messing with the data collection and creating dark data intentionally, so we should define this concept. Dark data, like dark matter and energy, is there, but you have no access to it. If data is corrupted or not taken at its source, it will be dark. All time series data has a dark beginning before it was collected systematically, but we are talking here about data outages. This can be loss of signal from a planetary mission which, if permanent, ends the mission. If officials mess with the data at its source for political gain, that is a loss for everyone. Fortunately, good data science technique can intelligently restore the data from corroborative sources and even highlight the cause of the outage.
As typed mathematical objects we have vectors, which can be indexed. The simplest possible vector is the bit, binary, or boolean string, chunked into bytes and words in machine level representations and hardware realizations. These bytes or words are typically indexed within working memories. Any particular hardware will have to specify these raw syntaxes in detail, and system and language tools realize the actual syntaxes, including binary coded raw programs or machine codes. For our purposes, we only need to know that a text string is a vector of whole number character codes from some set of icons. A bit string can be treated as a text with a two character alphabet, although in practice raw binary texts are most efficiently realized according to common conventions.
Our concern here is only with the types of objects that occur at the level of realization: binary strings for machine codes, and bitstrings as a representation of text strings. To these we add the idea of raw pointers, indexes to locations in the store. This is the scope of what the implementation syntax will need to do. Although we mention machine codes, describing the sets of operations and deeper syntax required in concrete realizations is intentionally out of scope at this level. We will also stop short of syntax descriptions and define code attributes in a general syntax that can be translated easily to any modern development language for integration with other systems.
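As a minimal sketch of these realization-level types, in Haskell notation (all names here, Bit, BitString, TextString, RawPointer, and the 8-bit chunking, are illustrative assumptions, not any platform's actual encoding):

```haskell
-- Minimal sketch of the realization-level types described above.
-- All names here are illustrative assumptions, not platform identifiers.

data Bit = Zero | One
  deriving (Show, Eq)

-- A bit string is a text over a two-character alphabet.
type BitString = [Bit]

-- A text string is a vector of whole-number character codes
-- drawn from some set of icons (e.g. Unicode code points).
type CharCode   = Int
type TextString = [CharCode]

-- A raw pointer is an index to a location in the store.
type RawPointer = Int

-- Example: realizing a text string as a bit string, using a fixed
-- 8-bit chunk per character code purely for illustration.
charToBits :: CharCode -> BitString
charToBits c = [ if odd (c `div` (2 ^ i)) then One else Zero | i <- [7,6..0] ]

textToBits :: TextString -> BitString
textToBits = concatMap charToBits
```

The fixed chunk width stands in for whatever common convention a concrete realization adopts; nothing at this level of description depends on the particular encoding.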
The Decko Card model I describe here will be close to the current implementation that holds the versions of this text. This is a familiar pattern now to many, but we must note that it did not exist until Ward invented and named it Wiki. In this sense, our languages, natural and formal, are always evolving the intelligence of our worlds. The communities of open source (OS) coders, designers, and architects are collaborating to do this every day. You literally cannot keep up with it. This work is addressed to the now and future generations who will step up to change the world and put us back on track to the open future. We call for the minds of the world to learn to come together in ever larger and more densely interconnected networks of care and support.
We won't need a financial system anything like the one we have, which is literally burning down the house and gardens of the world. If we damage Gaia, we will suffer, as we have already inflicted suffering upon many others on the path to economic efficiency and technological supremacy. We will need a new financial system for the transition, to fund commons development of all the projects already in motion. Consider this paying back the first generations of rebels who created the GPL (ref FSF, RMS). The founders paid it forward to us, and our duty is to take it to the next levels. This is not an economic paper; we only note the architectures of support that are needed to evolve our collective vision into a realizable future. The tools of the transition financial system are part of the systems that would eventually be designed and built by these methods.
Before continuing with the technical descriptions, we state that all of our design work is human oriented and in support of health and wealth creating (social) process architectures. We can't do the big things or the other things (ref JFK speech about the space program) without first caring for ourselves and others. Our world is now very complex and deeply connected and internetworked. We anticipate the emergence of new collective consciousnesses, first in smallish networks and building out. In comparison, what we describe here is simple, complicated and simple. We design our systems this way because the world is complex enough as it is. Push too hard for unity, and confusion and disharmony result. The world is just so many people building local towers to God's realm (ref Tower of Babel), and it has to stop. Pull together to something good enough; the perfect is the enemy of the good, and this is why.
The general class is Wiki, so let us consider what is common to Wikis. The general model is named data objects in collections. The objects can be edited by users, often under very open identity policies, but these are not essential. So we have the following, sketched in code after the list:
Name -> Content Instance within a Namespace
The current State of a Namespace is a Set of (Name, Content) pairs
A Name is the way to reference the particular content to view and edit
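As a minimal sketch of this general model, assuming plain text for both names and content (all identifiers here are illustrative, not taken from any platform):

```haskell
import qualified Data.Map as Map

-- Illustrative sketch, assuming plain text for names and content.
type Name    = String
type Content = String

-- The current State of a Namespace is a set of (Name, Content)
-- pairs with unique names, i.e. a finite map.
type Namespace = Map.Map Name Content

-- A Name references the particular content to view...
view :: Name -> Namespace -> Maybe Content
view = Map.lookup

-- ...and to edit.
edit :: Name -> Content -> Namespace -> Namespace
edit = Map.insert
```

The Name type is refined into simple and compound forms in the Card sketch further below.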
Any particular wiki platform will need to further define the syntax and semantics of the constituents. We will proceed as one would with a mathematical proof or demonstration, defining the terms and concepts we use only in sufficient detail to identify the particulars, while also making the continuum of possible mappings apparent, such that the model can be applied flexibly to the categories described.
We start with the Card objects themselves, the (Name, Type (another Card), Content) triples. The Name is both the human handle and the means of referencing in URIs, content, and code. The use of names as references is what dictates the semantics of names. Because we will use names to identify the pairs, multiple instances of the same name might get different accommodations on different platforms. Decko Cards also have another attribute, a Cardtype, which is also a Card. Already we see that a Card object is not complete in itself, but always has a Cardtype to select semantics. Therefore we have Cards:
Every Card has the following (a type-level sketch in code follows this list):
A Name which reduces to a keyname equivalence class (name -> key function: Keyname)
A Card may be assigned a permanent identifier such as a numeric CardID
A Cardtype which is another Card
Names can be simple or compound, a list of simple names
Optionally, an additional permanent name to be used in code can be assigned to some cards. (make this a footnote? ->) This might be considered an artifact of the customizable code actually being separate from the other rules (Settings that can be appended to Set cards), but maybe it has more to do with multiple namespaces (data and code) and we may need to generalize the codename concept.
An optional Content object; content persistence and semantics can be categorized by Sets
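Here is a hedged, type-level sketch of a Card as just described; the field names and the particular key reduction are assumptions for illustration, not the identifiers of the Ruby implementation:

```haskell
import Data.Char (toLower)
import Data.List (intercalate)

-- Illustrative sketch; field and type names are assumptions.
type SimpleName = String
type Content    = String

-- Names can be simple or compound (a list of simple names).
data Name = Simple SimpleName
          | Compound [SimpleName]

-- A Name reduces to a keyname equivalence class via a key function.
-- Lowercasing and '+'-joining here is a stand-in for the real,
-- platform-defined reduction.
type Keyname = String

key :: Name -> Keyname
key (Simple n)    = map toLower n
key (Compound ns) = intercalate "+" (map (map toLower) ns)

data Card = Card
  { name     :: Name          -- human handle; used in URIs, content, code
  , cardId   :: Maybe Int     -- optional permanent identifier (CardID)
  , cardType :: Keyname       -- the Cardtype, itself (a reference to) a Card
  , codeName :: Maybe String  -- optional permanent name for use in code
  , content  :: Maybe Content -- optional content object
  }
```

Modeling the Cardtype as a key referencing another Card, rather than embedding a Card value, is one way to express that a Card is not complete in itself without introducing an infinite regress.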
This is as far as we can get without putting Cards into contexts. Current implementations are single deck namespaces where the keys are unique and CardIDs are sequentially assigned record identifiers in the underlying database. We do not specify these details at this level, as this is not a syntactic specification. We will describe multi-deck semantics in terms of the object oriented ontologies we are developing.
A Deck can be constructed by standard operations that add, update, and delete cards from the set that is the current state of the deck (sketched in code after the semantics notes below). Note that the data representation can be mutable or immutable, with the latter having a number of advantages for caching, tracking, and managing change sets, as is done with code objects and more by tools like git and GitHub. With a data model similar to git's distributed change management model, an implementation can support flexible workflow tools for Card content in complex networks of collaborating producers.
Deck semantics: Names (via key reduction) are unique in a given deck.
Cardtype might be used to distinguish cards with the same name, but it is not in the reference implementation
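Building on the Card sketch above, a minimal sketch of deck construction as immutable values, with names unique per deck via key reduction (the function names are assumptions):

```haskell
import qualified Data.Map as Map

-- Illustrative: a deck's current state, keyed by keyname so that
-- names (via key reduction) are unique in a given deck.
type Deck = Map.Map Keyname Card

addCard :: Card -> Deck -> Deck
addCard c = Map.insert (key (name c)) c

updateCard :: Card -> Deck -> Deck
updateCard = addCard  -- inserting at an existing key replaces the old card

deleteCard :: Name -> Deck -> Deck
deleteCard n = Map.delete (key n)

-- Because Deck is an immutable value, every operation yields a new
-- state; prior states remain available for caching and change tracking,
-- much as git retains prior versions of code objects.
```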
Set Patterns are defined with name patterns; a sketch in code follows the description of card parts below.
Special cards are the indicators of patterns
These cards have zero to two slots (in the reference implementation; this could be extended) that specify the pattern parameters.
Zero slot keys are simple Cards that represent the sets with no slots, for example all cards (*all), or all cards with first character * (*star).
One slot uses the card in the slot as the Set selector; for example, ACardtype+*type is a single slot set representing all cards of the type given in the slot, ACardtype in this case.
More slots means more parameters, for example a right selector combined with a type, or, in a WikiRate extension, two types for the left and right parts of a given card.
Card Parts: When a cardname has more than one part (is not simple), it has parts: the tag or right part, which can be matched with *right Sets, and a trunk or left part. The right part is always simple; therefore compound Cards are constructed from a trunk card by adding to the right, that is, by adding a tag.
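A hedged sketch of set patterns and card parts over the Name type from the Card sketch above; the constructor names and matching rules are illustrative simplifications of the reference implementation:

```haskell
-- Illustrative set patterns; constructor names are assumptions.
data SetPattern
  = AllCards                        -- *all  (zero slots)
  | StarCards                       -- *star (zero slots)
  | OfType Keyname                  -- ACardtype+*type (one slot)
  | RightTag SimpleName             -- tag+*right      (one slot)
  | TypeAndRight Keyname SimpleName -- two slots (e.g. WikiRate extension)

-- Trunk (left part) and tag (right part) of a compound name.
trunk :: Name -> Maybe Name
trunk (Compound [l, _])             = Just (Simple l)
trunk (Compound ns) | length ns > 1 = Just (Compound (init ns))
trunk _                             = Nothing

tag :: Name -> Maybe SimpleName
tag (Compound ns) = Just (last ns)
tag _             = Nothing

-- Does a card fall within the set a pattern describes?
matches :: SetPattern -> Card -> Bool
matches AllCards  _ = True
matches StarCards c = case name c of
                        Simple ('*':_) -> True
                        _              -> False
matches (OfType t)   c = cardType c == t
matches (RightTag r) c = tag (name c) == Just r
-- Simplified: the two-slot case is checked against the card's own type
-- here; a fuller treatment would key on the left part's type, which
-- requires a deck lookup omitted from this sketch.
matches (TypeAndRight t r) c = cardType c == t && tag (name c) == Just r
```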
Settings and Rules: Another special type of simple card is the Setting, which, when added (as right part or tag) to a Set card trunk, becomes a rule for the trunk card (Set), such that the Setting is defined for that Set.
Precedence: When multiple rules match a card for a given Setting, the most specific rule is used as defined by an ordering of the Set keys.
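A sketch of rule lookup under precedence; the specificity ranking shown is an illustrative assumption standing in for the reference implementation's actual ordering of Set keys:

```haskell
import Data.List (sortOn)
import Data.Maybe (listToMaybe)

-- A Rule binds a Setting's value to a Set pattern; names assumed.
type Setting = String
data Rule = Rule
  { ruleSet :: SetPattern
  , setting :: Setting
  , value   :: Content
  }

-- Assumed specificity ranking: lower = more specific. The reference
-- implementation defines its own ordering of Set keys.
rank :: SetPattern -> Int
rank (TypeAndRight _ _) = 0
rank (RightTag _)       = 1
rank (OfType _)         = 2
rank StarCards          = 3
rank AllCards           = 4

-- The most specific rule matching a card for a given Setting, if any.
lookupRule :: Setting -> Card -> [Rule] -> Maybe Rule
lookupRule s c rules =
  listToMaybe
    (sortOn (rank . ruleSet)
       [ r | r <- rules, setting r == s, matches (ruleSet r) c ])
```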
Code Rules: View formatting and implementation details are to be organized or generated according to Sets and Set Precedence definitions. In the Ruby reference implementation this maps naturally onto a loading process for singleton classes for each Card object. Similar mechanisms are available in most object oriented languages. Defining modular code in a universal implementation language can be accomplished with modern tools, as they are or with extensions. This is the next step from this high level description. A tool should be able to extract universal code from the RoR implementation.
Multiple Decks are a critical design topic for Card Systems to grow into diverse networks of content. We want to define how Namespaces and having many Decks might function in an evolving network of heterogeneous content. Eventually, you should be able to load a specification of MediaWiki and have naming support for external MediaWiki instances. Proper linkages, both to ReSTful APIs on each platform and via Federation (ref Ward's FedWiki project), are expected to lead to workflow production chains for a broad class of content.
Decko is designed with Web access and APIs in mind, and in the implementation the Names transition seamlessly from URI space to content links and code references to cards. Since we don't have a proposed
New Insights
This will require some refactoring of the structure of this card: adding a separate card to define terms before talking about meanings, then syntax:
{{Terms of Naming and Binding|titled}}
And now, in another card, a start at a spec formal enough to feed tools. This will be an experiment in [[Diagramatic Immanence]]
{{A Deck Representation Syntax|titled}}