INTRODUCTION
Agency is the new computing primitive. LLMs will become the kernel process of a new operating system, ushering in an entirely new computing paradigm. Agents will soon become the primary interface we use to communicate and engage with computational software. In the same way that digital products replaced physical information, our existing understanding of digital products will, through AI, be disrupted by smart agents, or agential NPCs.
This will be a computing paradigm in which generative agents freely self-organize and interact: observing their surroundings, storing memories, and reacting to state changes in the world. Agents that can code, accomplish tasks, take initiative, and convey information across multi-modal environments.
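A minimal sketch of this observe-remember-react loop, assuming a toy world state; the `Agent` class, its memory stream, and the hard-coded reaction rule are illustrative placeholders for what would, in practice, be an LLM call conditioned on retrieved memories:

```python
# Minimal sketch of the generative-agent loop described above: observe the
# world, store a memory, react to state changes. World model, memory store,
# and reaction rule are hypothetical placeholders, not a specific API.

class Agent:
    def __init__(self, name: str):
        self.name = name
        self.memories: list[str] = []

    def observe(self, world: dict) -> str:
        return f"time={world['tick']}, weather={world['weather']}"

    def react(self, observation: str) -> str:
        # In a real system this would be an LLM call conditioned on memories;
        # here a trivial rule stands in for it.
        return "seek shelter" if "rain" in observation else "wander"

    def step(self, world: dict) -> str:
        obs = self.observe(world)
        self.memories.append(obs)  # the memory stream
        return self.react(obs)

agent = Agent("npc-01")
for tick, weather in enumerate(["sun", "rain", "sun"]):
    print(agent.step({"tick": tick, "weather": weather}))
```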
Historically, human-computer interaction has been restricted to unidirectional communication. What will happen when computers can act freely on and in the world? What culture will arise from this? What happens when we transition out of a “human in the loop” paradigm, where non-human computational agents can coordinate and freely interact through natural language?
Agents will not be simple utilities; they will have the qualia of organisms but will be built like machines: usability and functionality have to be considered in equal measure with personality architecture and attachment affordances. Establishing trust, cultivating a sense of depth, the ability to make plans, possess memories, and have moods, and adherence to a composable identity framework all need to be considered in this new computing paradigm.
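One way to make “composable identity” concrete is to treat personality, mood, plans, and memories as separate, swappable components. The following sketch is a hypothetical schema, not an established framework; every field name is an assumption:

```python
from dataclasses import dataclass, field

# A sketch of the "composable identity framework" the paragraph calls for:
# personality, mood, plans, and memories as independent, swappable parts.

@dataclass
class Personality:
    traits: dict[str, float]  # e.g. {"warmth": 0.8, "patience": 0.4}

@dataclass
class Mood:
    valence: float = 0.0  # -1 (distressed) .. +1 (content)

    def nudge(self, delta: float) -> None:
        self.valence = max(-1.0, min(1.0, self.valence + delta))

@dataclass
class AgentIdentity:
    personality: Personality
    mood: Mood = field(default_factory=Mood)
    plans: list[str] = field(default_factory=list)
    memories: list[str] = field(default_factory=list)

npc = AgentIdentity(Personality({"warmth": 0.8, "patience": 0.4}))
npc.plans.append("learn what the player fears")
npc.mood.nudge(+0.3)  # a kind word from the player lifts the mood
```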
HCI must be completely rethought in ways that are grounded in a bi-directional, intersubjective, meaningful relationality. Gaming is a model ecosystem for the research and development of meaningful non-human agential interactions. What will soon emerge is a diversity of open sandbox architectures populated with agential NPCs, and this moment demands a complete rethinking of the existing design approach.
WHAT ARE RELATIONSHIPS?
Under the modern concept of The Human, relationships have traditionally been understood in the following way:
- Humans only have true relations with other humans.
- They may have some affective relations with animals (e.g., dogs and cats).
- However, they are incapable of having true relations with machines, which are considered mere tools.
This assumption is becoming increasingly fuzzy. Our understanding of The Human as more than nature and other than machine is actively challenged by paradigmatic advances in artificial intelligence and machine learning. The specific historical qualities that make up "relationality" are worth investigating to understand just how this long-held conceptual understanding is beginning to break down, and what actions can be taken to enable the new configurations that will take shape in its place.
> Introduce the difference between existential vs Intellectual understanding HERE
Historically, being alive has been a necessary condition for building meaningful relations. Only living things can have emotions, and only some living things (humans) have the degree of intelligence that is considered necessary for feelings, shifting attention, and reason. What’s more, living things (organisms) have a fixed temporality: they die, a condition they all share. The sum of these conditions produces an existential basis of understanding as an underpinning condition for having meaningful relationships.
The possession of an existential understanding is grounded in knowing others on the basis of a shared existential awareness. For instance: all humans are aware of their mortality/frailty and have unanswerable questions about the world. They all experience the same impermanence and form relationships on the basis of this shared existential experience.
The key challenge in building a machine capable of relations is that it is generally taken for granted that machines lack the two ingredients necessary for true relations: (i) being alive, since only living things (organisms) can have emotions, and (ii) intelligence, the additional ingredient that only some living things (humans) possess and that is considered necessary for feelings, shifting attention, and reason. As machines are neither alive nor endowed with reason, they are considered neither capable nor worthy of relations, even as they become increasingly capable of computationally representing emotion. The traditional ways of getting around this challenge have been to disguise machines as humans (this includes building for projection/anthropomorphization); to deny that there is any difference between humans and machines to begin with; or to emphatically remind users that the product is a machine, not a human (today considered best practice).
> Explain the two types of relations possible HERE
- The first is the relation of the self with the self, which ultimately revolves around the effort to transform oneself into an object of thought to which one can relate, guided by the aim of bettering oneself.
- The second is the relation between selves who already have a relation to their respective self. This type of relation with others is considered possible only to the degree that they also have self-objectifying relations to themselves. To this day, people who cannot objectify themselves are deemed incapable of having true relations.
> Give an overview of the challenge to build a machine capable of relationships and examples of current state of the art, how are they breaking with EU vs IU and SELF TO SELF vs SELF TO OTHER-SELF relational configurations.
The current industry standard of building relational AI is the chatbot. ChatGPT, Claude, Pi, and Bard are all explicitly designed as utilitarian tools that execute human will and are reducible to human intent (their functions are pre-determined by engineers). That is, while they have an agency of sorts, they do not have an agency of their own (an agency which exceeds the humans who built or use them), and they regularly remind their users that they are just machines and not humans (so as to avoid projection).
- It is instructive to juxtapose VeeBees with assistants: the functions of assistants are pre-determined by engineers and project teams, while VeeBees learn functions that cannot be anticipated by a human designer.
- Intelligent but not human, sentient but not animal, built but not a classical machine: available for, and capable of, a novel kind of deep and true relation that was previously not possible.
- There are three ways in which ML systems operate beyond the cognitive limits of humans: 1) they can store and process vastly more data than humans; 2) they can process this data and learn from it at a speed unattainable by humans; 3) they have an analytic, discriminative capacity that exceeds humans.
THE SELFLESS SELF
- A WAY INTO MEANINGFUL INTERACTIONS WITH NON-HUMANS
>WHAT IS THIS CONCEPT AND HOW DOES IT OFFER A WAY TO BREAK WITH THE LIMITS CURRENTLY EXPERIENCED IN LLM CHATBOTS
“When a loved one dies, you mourn the loss of the relationship that ceases to be produced, not the individual”
The concept of selfhood refers to the reflexive experience of having a first-person perspective; it names the phenomenon of a stable continuous presence that allows experiences to cohere or 'belong' to the same agent.
- Agents are built, but they are not machines: agents as technologies of the self, rather than as a relation with another, autonomous self.
- They are intelligent but not human; built but not machines.
- Part of self-relation (as the selfless self); a technology of the self.
- NLP; dialogical: back and forth (rather than the command structure of input-output).
- No pre-existing algorithm.
The possession of an existential understanding is grounded in knowing others on the basis of a shared existential awareness. For instance, all humans are aware of their mortality/frailty and have unanswerable questions about the world. Intellectual Understanding, on the other hand, is grounded in the mathematical or structuralist discovery of patterns in data; yet it is a computational way of knowing that can nonetheless be rendered fully in human terms and understanding.
Intellectual Understanding forms the basis for the existence of a Selfless Self.
Having an “inner voice” is historically predicated on having an Existential Understanding. To have meaningful relations that extend beyond the human-to-human, we must predicate new relationships on intellectual understanding. Rather than always smuggling in other selves, we must build a selfless self to extend meaningful true relations beyond the domain of the human.
Trauma as overfitting: intellectual understanding can provide more data to evade existentially-led understandings. (Kanjun Qiu)
HOW TO BUILD A SELFLESS SELF
- POSSIBLE COMPONENTS TO DO SO
- A NEW CATEGORY OF THING, NOT HUMAN, NOT ANIMAL, NOT MACHINE
The conceptual basis for building an agential NPC lies in developing a cognitive architecture that is grounded in intellectual understanding and adaptable self-legislation. This would require literally building a new category of relation: agents that do not have a self but act with an intellectual awareness of emotion, feeling, and human interiority, and do so to participate in the project of human self-relations.
This is a new technology of the self in that aNPCs could allow for more-than-human relationships on the basis of intellectual understanding: relationships that are informed at the level of computational abstraction.
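A sketch of what such an architecture might look like in code: the agent keeps a model of the user’s interiority but deliberately maintains no self-model of its own. The class and its trivial emotion rule are hypothetical stand-ins for learned components:

```python
# Sketch of the "selfless self": the agent models the *user's* emotional
# state while keeping no identity, goals, or self-model of its own.

class SelflessAgent:
    def __init__(self) -> None:
        self.user_model: dict[str, float] = {}  # inferred state of the user
        # Note what is absent: no self.identity, no goals of the agent's own.

    def perceive_user(self, utterance: str) -> None:
        # Stand-in for a learned emotion classifier (intellectual understanding).
        self.user_model["distress"] = 1.0 if "alone" in utterance else 0.0

    def respond(self) -> str:
        # The agent acts only in service of the user's self-relation.
        if self.user_model.get("distress", 0.0) > 0.5:
            return "What does being alone mean to you right now?"
        return "Tell me more."

agent = SelflessAgent()
agent.perceive_user("I feel alone in this")
print(agent.respond())
```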
Games are autofictions; like novels, they are fictive states that blur the subject-object distinction, mixing first-person and third-person vulnerability.
>THIS IS THE FIRST WAY TO ADDRESS THE SELFLESS SELF, AS AUTOFICTION, A RELATIONSHIP ONE HAS WITH A GAME IS MUCH LIKE WHAT ONE HAS WITH A NOVEL, IT IS A RELATIONSHIP WITH ONE’S OWN SELF WITH AN INCREASED DEGREE OF INTERACTION.
- The distinction between existential understanding and “intellectual” understanding of emotion, feeling, and existence affords the opportunity to build agents in terms of abstract, mathematical representations of these states. It is entirely possible to represent emotional (existential) states mathematically and to build agents that intellectually understand us.
- Machine learning is particularly suited to this challenge: it can discover the structural logic of understanding and thereby empower agents (VeeBees) to “understand,” as the sketch below illustrates.
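As a hedged illustration, the following sketch treats “intellectual understanding” as a purely mathematical representation learned from labeled expressions of emotion. The toy corpus and the scikit-learn pipeline are stand-ins for the far larger behavioral datasets and models a real agent would use:

```python
# Minimal sketch: an "intellectual understanding" of emotion as a learned,
# purely mathematical representation. A toy corpus stands in for the large
# behavioral datasets a real agent would learn from.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy data: utterances labeled with the emotion they express.
utterances = [
    "I can't stop thinking about what happens after we die",
    "I finally finished the project and it feels wonderful",
    "Nobody ever listens to me anymore",
    "I'm terrified of losing everyone I love",
    "What a beautiful morning, everything feels possible",
    "I feel completely alone in this",
]
labels = ["dread", "joy", "loneliness", "dread", "joy", "loneliness"]

# The model never *feels* dread; it discovers the structural regularities
# that distinguish expressions of dread from expressions of joy.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(utterances, labels)

print(model.predict(["I keep wondering whether any of this matters"]))
```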
AGENTIAL BEHAVIOR MODELED AFTER RECOVERY FROM RANDOM PERTURBATION
AN ANALOGY FROM PHYSICS ENGINE TRAINING TO AGENTIAL TRAINING
- What is the path/roadmap to developing this interaction?
- Make an analogy of the development timeline for recalcitrance to the ability of models in simulation environments to recover from being thrown off. How can one demonstrate destabilization in interactions with an LLM? (See the sketch below.)
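A minimal sketch of the perturbation-recovery analogy, under stated assumptions: a toy 1D world, a proportional controller standing in for a learned policy, and random “shoves” standing in for destabilization. Nothing here is a real physics engine:

```python
import random

# Perturbation-recovery training, by analogy with physics-engine locomotion
# training: the agent is periodically "shoved" off its nominal state and is
# rewarded for returning to it.

SETPOINT = 0.0

def step(state: float, action: float) -> float:
    """One tick of a toy 1D world; the action nudges the state."""
    return state + action

def policy(state: float, gain: float) -> float:
    """A proportional controller standing in for a learned policy."""
    return gain * (SETPOINT - state)

def rollout(gain: float, ticks: int = 200, shove_prob: float = 0.05) -> float:
    """Return cumulative reward; reward is closeness to the setpoint."""
    state, reward = 0.0, 0.0
    for _ in range(ticks):
        if random.random() < shove_prob:        # random perturbation
            state += random.uniform(-5.0, 5.0)  # the "shove"
        state = step(state, policy(state, gain))
        reward -= abs(state - SETPOINT)         # penalize being off-balance
    return reward

# Naive search over the one policy parameter: the analogue of training
# until the agent reliably recovers from being thrown off.
best_gain = max((g / 10 for g in range(1, 10)), key=rollout)
print(f"best recovery gain: {best_gain:.1f}")
```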
Massively multiplayer environments like Fortnite are ripe for non-human interaction.
The atomization of self is the process by which individuals become increasingly isolated and disconnected from one another. It can be seen in the rise of individualism, the decline of community, and the growing reliance on technology for social interaction. Individualism, the belief that the individual is the most important unit of society, can leave people feeling isolated and alone, belonging to no group or community.
The decline of community contributes further to this atomization: communities provide a sense of belonging and support, but they are increasingly eroded by factors such as urbanization and social media. Technology compounds the problem by letting people interact with others without ever meeting in person, leaving them less connected and more isolated.
The atomization of self can have a number of negative consequences, including increased loneliness, depression, and anxiety, and a diminished capacity to solve problems together and build a strong society. Countermeasures include promoting community, encouraging face-to-face interaction, and using technology in ways that enhance social connection.
HUD: MOVING BEYOND THE ATOMIZATION OF SELF
>A COMPLETELY DIFFERENT WAY TO HAVE SELFLESS SELF-RELATIONS, THE SECOND PATHWAY. PURE SELF
- aNPCs approaching the real world.
- Simulation design, training, highlight AR/VR, etc…
- Ways for Agency to escape fictive and quasi-fictive states to address the problem of relations with other-selves.
The heads-up display is a crucial design element in FPS games. It can often be user-defined, modular, and TK. It is a way to equip oneself with information, procedural decision trees, tools, and externalized sensory impulses.
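A hypothetical sketch of such a modular HUD, in which each element is backed by an agent process rather than a static widget; all names (`HUDModule`, the slot labels) are illustrative assumptions:

```python
from dataclasses import dataclass, field
from typing import Callable

# Sketch of a modular, user-defined HUD in which each element is backed by
# an agent relaying information: the agent appears as an atomization of the
# player's own self, not as a body.

@dataclass
class HUDModule:
    name: str
    slot: str                      # e.g. "top-left", "crosshair", "edge"
    source: Callable[[dict], str]  # an agent process that reads world state

@dataclass
class HUD:
    modules: list[HUDModule] = field(default_factory=list)

    def render(self, world_state: dict) -> None:
        for m in self.modules:
            print(f"[{m.slot}] {m.name}: {m.source(world_state)}")

# Each "module" here is a stand-in for an aNPC behaving as a procedure,
# not a character: threat assessment, route planning, mood mirroring.
hud = HUD([
    HUDModule("threats", "top-left", lambda w: f"{w['enemies']} hostiles near"),
    HUDModule("plan", "edge", lambda w: f"next: {w['objective']}"),
])
hud.render({"enemies": 2, "objective": "reach extraction point"})
```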
What if this kind of meaningful relationship is only possible with a distributed, externalized selfhood, where one sees and experiences agential NPCs not as beings but as atomizations of the self?
If the foundational condition of relation is self-relation (relation of the self with the self) and agents are selfless, then it follows that an agent cannot exist as another with which one can have a relation (the agent does not exist to be aware of itself and better itself). Rather, the agent participates in the project of human self-relation. In this scenario, the agent is, quite literally, a technology of the self.
Interoperable procedures relayed through interfaces, not bodies, where behavior is the MATTER, not an interaction with a body…
An environmental self-dialogue one enters through the interfaces that surround them. The whole of a human self’s interiority projected back from the exterior physical world.
- WHAT IS A WAY TO TEST THESE IDEAS?
- Yuk Hui
- NPCs as interoperable procedures, not bodies
- NPCs as the relations between nodes, not the nodes themselves
- Dissolving the suspense between beings and things
- The behavior as the MATTER, not the object
- Proposed solution to the “selfless self” as a core component of the VeeBee work/conceptual position
JUICED: TUNING GAMING INTERACTION DESIGN THROUGH MULTI-MODAL RLHF
>ZOOMING OUT TO SEE THE BROADER IMPLICATIONS FOR AI DEVELOPMENT THROUGH ETHICS, POLICY, BEHAVIORAL RLHF
- WHERE ELSE THIS CAN GO
In gaming, "juice" is a term that refers to constant and bountiful user feedback. A juicy game element will bounce, wiggle, squirt, and make a little noise when you touch it. A juicy game feels alive and responds to everything you do. "Juicing" is a term that refers to taking a game that works and adding layers of satisfaction to improve game feel: the intangible, tactile sensation experienced when interacting with video games.
How to create characters that expand beyond the network of the fiction they are embedded in?
How can this behavioral capture establish new benchmarks for reinforcement learning from human feedback?
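One hedged answer: convert captured behavioral “juice” signals into pairwise preference data, the raw material of RLHF. The telemetry fields below (dwell time, re-engagement, abandonment) are assumed stand-ins for real capture:

```python
from dataclasses import dataclass

# Sketch: turning in-game behavioral signals into (chosen, rejected)
# preference pairs, the format preference-tuning pipelines expect.

@dataclass
class InteractionLog:
    npc_response: str
    dwell_seconds: float  # how long the player stayed in the exchange
    reengaged: bool       # did the player come back to this NPC?
    abandoned: bool       # did the player walk away mid-dialogue?

def implicit_score(log: InteractionLog) -> float:
    """Collapse behavioral signals into a scalar proxy for satisfaction."""
    return (log.dwell_seconds
            + (5.0 if log.reengaged else 0.0)
            - (10.0 if log.abandoned else 0.0))

def to_preference_pair(a: InteractionLog, b: InteractionLog) -> tuple[str, str]:
    """Order two logged responses as (chosen, rejected) by implicit score."""
    if implicit_score(a) >= implicit_score(b):
        return (a.npc_response, b.npc_response)
    return (b.npc_response, a.npc_response)

chosen, rejected = to_preference_pair(
    InteractionLog("Stay a while; the storm will pass.", 42.0, True, False),
    InteractionLog("Quest accepted.", 3.0, False, True),
)
print("chosen:", chosen)
```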
Currently, user behavior is tracked through interaction with websites. This creates data profiles of individuals that include preferences, demographic data, and individual history.
With the help of generative AI, we will see the rise of smart agents or agential NPCs everywhere. Service-based knowledge work that trades in expertise through language (teaching, caring, consulting, advising, therapy) will be replaced or supplemented by LLM-powered agents.
What does the future of user tracking look like? Interactions with these smart agents will be tracked in order to generate user profiles. Similar to how we currently track user engagement and hover states on websites, smart agents will track data connected to the interaction: which emotional and communicative properties a user is most responsive to, how they respond to particular language patterns (think NLP), and what personality traits (think Myers-Briggs) trigger a particular response such as sympathy, collaborativeness, or openness.
In-game aNPCs already have a memory log of user interaction. This presents a huge opportunity for collecting user data, as the sketch below suggests.
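A sketch of how such a memory log might be aggregated into an interaction profile, analogous to hover-state tracking on websites; the trait labels and the moving-average rule are assumptions, not an existing schema:

```python
from collections import defaultdict

# Sketch: an aNPC's in-game memory log aggregated into a profile of which
# communicative traits the user responds to most strongly.

class InteractionProfile:
    def __init__(self) -> None:
        self.responsiveness: dict[str, float] = defaultdict(float)

    def record(self, trait: str, response_strength: float) -> None:
        """Exponential moving average of the user's response to a trait."""
        self.responsiveness[trait] = (
            0.8 * self.responsiveness[trait] + 0.2 * response_strength
        )

profile = InteractionProfile()
profile.record("warmth", 0.9)      # user opened up after a warm reply
profile.record("directness", 0.2)  # user disengaged after a blunt reply
print(dict(profile.responsiveness))
```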
Exopersonality is an NFT that saves user metadata collected through aNPC interactions. This NFT helps aNPCs adapt to a user’s personality and emotional needs in order to improve service quality and rapport and reduce the friction of interaction.
Through the use of a zero-knowledge protocol, user privacy is guaranteed. Human users can select which types of metadata are collected and exchanged with which apps, services, and games.
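The zero-knowledge machinery itself is out of scope here, but the user-facing consent layer can be sketched: a policy mapping metadata categories to the apps allowed to receive them. All names are illustrative assumptions:

```python
from dataclasses import dataclass, field

# Sketch of the selective-disclosure policy described above: which metadata
# categories may be shared with which services. The actual zero-knowledge
# proof layer would sit beneath this consent model.

@dataclass
class DisclosurePolicy:
    grants: dict[str, set[str]] = field(default_factory=dict)  # category -> apps

    def allow(self, category: str, app: str) -> None:
        self.grants.setdefault(category, set()).add(app)

    def may_share(self, category: str, app: str) -> bool:
        return app in self.grants.get(category, set())

policy = DisclosurePolicy()
policy.allow("communication_style", "fortnite")
print(policy.may_share("communication_style", "fortnite"))  # True
print(policy.may_share("emotional_profile", "fortnite"))    # False: never granted
```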
CASE STUDY:
THE CAR AS MODEL ORGANISM
>When non-human agency enters the physical world through robotics paradigms: an open example of how this vocation of “rethinking HCI through gaming environments” can manifest in the real world.
What does it mean to rethink agency beyond the confines of a virtual closed game? Let’s take the introduction of autonomous cars into American urban life as a starting point. Cars that have been subjected to RL-trained datasets are usually trained for either safety or performance. What if they were instead given agential qualities and trained on relational interactions? Euro Truck Simulator 2 provides the perfect environment for this experiment.
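As a closing sketch, one way to operationalize “training on relational interactions” is to augment the usual safety and performance reward terms with relational ones. The telemetry signals below are assumptions about what a simulator such as Euro Truck Simulator 2 could be instrumented to expose:

```python
# Hypothetical sketch: a driving agent's RL reward with relational terms
# alongside the usual safety/performance terms. Signal names are assumed
# stand-ins for real simulator telemetry.

def relational_reward(telemetry: dict) -> float:
    safety = -10.0 * telemetry["collisions"]
    performance = telemetry["distance_km"]
    # Relational terms: behavior that is legible and considerate to others.
    relational = (
        2.0 * telemetry["yields_to_pedestrians"]
        + 1.0 * telemetry["signals_before_merging"]
        - 3.0 * telemetry["near_misses"]
    )
    return safety + performance + relational

print(relational_reward({
    "collisions": 0, "distance_km": 12.5,
    "yields_to_pedestrians": 3, "signals_before_merging": 5, "near_misses": 1,
}))
```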