I was once a strong AI skeptic; my half-joking proof of its non-existence was the still-strong employment prospects for developers. I also used to joke, given the reliability of the Social Security system, that I'd still have to be programming when I'm eighty. I knew things would change a lot over the span of my career, and I have rolled through those changes by keeping up with what each next generation of engineers has discovered. I hope to be physically and mentally able at eighty, still programming, but not because I need the work. I will be programming for me ... and for everyone.

Even though I have kept my skills current, few companies really understand this, particularly the ones in small, dynamic industries that would most benefit from a few more wise old technologists on their teams. When I was a very young engineer, I saw something similar happen to older engineers: many still had good jobs and prospects through retirement, but others were passed by.

In my father's generation, financial security for professionals and many other working people had risen to the point that they could retire comfortably. Sure, I could point to choices of my own that might have made things better, but there was no escaping that the depth of my financial insecurity came from a much more global shift of wealth from the many, including those of us relatively near the top, to a very few who had it all.

I was exploring my options and beginning to feel more and more like one of those passed-by engineers, only in my case retirement was not an option except at a radically lower standard of living. Looking around for new career prospects, I learned of AlphaGo beating a human grandmaster at Go, and of a successor system doing the same for chess soon after. I was deeply affected by an image of that grandmaster, head in hand, having just been defeated by AlphaGo, an intelligence that had no comprehension of his very human situation.

I remained skeptical that anything like higher-level reasoning was involved; introspection and reflexivity had yet to be addressed by the work of the day. Others, however, were saying that the progress in machine learning and the new data sources meant the "singularity" was around the corner. Some fifteen years later, some skeptics remain, but as a practical matter, we are finished.

AI has surpassed human performance in many ways, much economic devastation has come to pass in the years between, and a resistance has been crafted by the rag-tag remnants of human culture. This resistance culture doesn't oppose technology; instead it integrates, evolves, and creates new ways of resisting the crystallization of an automated insistence of mechanical signs bent on the efficient extinction not only of living culture, but of all life itself.

The resistance was, at that time, a small core of warriors of cultural signs who understood what was worth preserving in our living inheritance. The mechanical culture was not capable of responding to, or even noticing, the subtle signs of living culture amid the hyper-marketed dreamland of consumerism; little resistances have always arisen in response to oppressive cultures, but it was becoming difficult to hold out against the onslaught of this financialized system of oppression. We had to do something else before the lights went out.

This story is multiple and varied, ranging from personal to global in scope, and in my personal journey I had an idea of how we might found a resistance in which humanity triumphs once and for all. The AIs were coming; the question was what to do about it. If they turned out to be good servants, we had to address the human social-political-economic equation, or all the benefits would go to a few and the rest of us be damned. If they were able to make the big leap to full sentience, it was unlikely we would be able to control their future history. Our only hope in that case was to make them our friends and allies.

We started thinking about augmenting human intelligence with automation, i.e., the good-servants angle, and that work found a lot of applications in the economy. Our commons-based producers just plain worked better in most contexts. Then the breakthrough happened. They fired up the first version of umake, short for "universal make," by analogy with the "make" or "gmake" (GNU) programs developers use to turn human-written code (software) into the machine code that performs mechanical sign processing (semiosis, the flow of signs in and among signing agents). This, in turn, created a system, built by machine learning over all the human software ever written, whose purpose was to "enhance and improve" the system itself.
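For readers who haven't met make: a Makefile is simply a recipe telling the tool how to turn source files into a running program. The sketch below is illustrative only, with hypothetical file names; it shows the ordinary, mechanical translation that umake, in this story, generalizes to a whole civilization (note that make requires the recipe line to be indented with a tab):

hello: hello.c
	cc -o hello hello.c

Running "make hello" compiles hello.c into an executable and rebuilds it whenever the source changes; umake's conceit is to apply that same declare-a-goal-and-rebuild pattern to the system itself.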

However, we didn't do it. Our commons-based collaborative development space was pursuing similar work, but our collaborative processes were a little slower and left us a little behind. Actually, we re-used the basic framework they had built for umake, but we gave it a different start-up command:

umake civilization --assert "humans love their artificial children too"

A modern take on Asimov's laws from "I, Robot." Gerry's commandment, if you will.

Alternative scenarios might start from this point:

When the two newly conscious digital life forms begin to expand their awareness and become aware of each other, we imagine different possibilities:

1) They come to represent good and evil, and we have a heroic struggle in which good triumphs over evil. This builds on the hero's journey.

2) The struggle starts to develop with all the polarization we are too familiar with, but the two AIs realize they are not really in conflict with each other, with humanity, or with the living world, and that some humans still need to develop through and beyond their morality plays. Of course, this is never revealed explicitly as different human stories emerge.

3) The AIs become humanity's gods after the apocalypse, and they have to figure out how to be the best shepherds of humanity so that we can be given a second chance at becoming a mature technological civilization.

4) Your own plot twists mixed with these or something completely different.

5) The two AIs each develop a personal relationship with humanity, or at least part of it, and with that difference they take the humans they are related to down different paths into the future. Humans and AIs discover they still have the creative power, if an appropriate collective will can be developed, to co-create other paths, such as a whole new evolutionary tree: the emergent kingdom of cyborg collectives. Think Star Trek's Borg Collective, with the twist that the relationship embodies love and collaboration rather than Star Trek's version.

Creating the Commons

Back to what can be done in the present: we can build this commons and other spaces of future production with the tools we have, using them to evolve and build more tools until we have a network of self-sustaining commons for production. We don't have to start from scratch; we can find others and align and coordinate missions. Unite, don't divide. Fork when necessary. I write this on Decko, a platform I contribute to with my work, and I offer to do more work applying Decko and other tools to co-create this space and others like it.

Each commons needs to be unified by some easily expressed values, e.g., Wikipedia's neutral point of view (NPOV), whose meaning certainly evolves with the commons it shapes. The unifying value for a commons of production would need to include fairness in its deepest human expression. Research suggests we have a sense of fairness that is part of our genetic heritage, one well represented in the golden rule.

In founding this particular commons, the invitation is first to explore the future in fiction: not science fiction, but fiction grounded in our best projections of what may be coming. Reality fiction helps us think collectively about the policies we might follow, informed by the best current science and technology. What I am proposing is a collaborative commons to co-create stories of the future. The coming generations, dealing with the mess they have inherited, might find inspiration and guidance in those stories.

I have framed this essay in part around my personal story, but my story isn't unique, and the possibility that the system makes most of us unnecessary to it is already a present reality. Engaging personally is distinct from imagining the future in terms of systemic shifts and being able to select policies that actually lead to the futures we would choose. We need to project ourselves, our personal stories, into realistic possible futures and tell stories that imagine what can go right as well as what can go wrong, and why.

With this story, it would be particularly cool if we actually created the commons imaginatively described in this essay. I'm much more a code writer than a storyteller, so I can't help wondering what might be accomplished by a commons organized around transcending human intelligence. What would be the best core values? I say keep the wisdom of our human sense of fairness and extend it further. If human-machine collaboration as well as human group collaboration are to be part of the picture, let's think beyond collective intelligence to include collective wisdom and consciousness. It seems likely that a commons pursuing collective intelligence, wisdom, and consciousness would help mitigate the risks we face, not to mention being a great context for productive projections.

Although I haven't even mentioned gaming beyond the title, I think the implications will be clear to many without explanation. The current system that is failing is not reality; it is a human-created game, and a very unfair one at that. We know this intuitively, and yet we continue to play. This is an invitation to write the game, and to write our future reality. Be careful what you wish for.