Hive-Mind Mentality

What Does Everyone Think About Everything?

So far in this series of articles on parasocial relationships, we have discussed the importance of authorial intent, the stressors associated with technology and social interaction, the ways our admiration for celebrities influences our purchasing behaviors, and the potential for creating machines that have their own intelligence. In this final installment, we discuss humanity's potential to create a technological vessel that will house biologically derived human intelligence.

In a marketplace of social interaction that values ideas over forms, an entity representing the shared knowledge of a group (a hive-mind) would potentially be a more satisfying communication partner than a million separate people. Google CEO Larry Page views the human brain as a computer, and Google as an early form of A.I. informed by humanity’s collective agency.[4012]

Adopting New Technologies

Ignoring Ethics for Convenience

Of course, not everyone in the world might be so inclined to engage in a hive-mind mentality, even if that is the mainstream course a global population eventually chooses. Each person would still have to decide for himself or herself whether the subjective value of joining a global mind outweighed the potential loss of individuality. Refusing to define subjective value for other people simply allows each member of a population to make a personal choice about engaging with any art or technology he or she finds worthwhile.[4013] This applies to both parasocial and dynamic social relationships.

But while an individual can opt out of any personally undesirable technological advancement as a moral choice, a population as a whole will change behaviors en masse based on capability, not moral imperative. This is clear in the adoption of new technologies over the past century, including the automobile, mobile phone, and air conditioning. There are plenty of reasons why no one should ever use these technologies from a moral/ethical/environmental/health standpoint, but clearly the majority of people who can use them will use them.

Similarly, if an online avatar allows one to obscure or outright transform identity, a large percentage of people who have access to technology that allows such identity transformation will use it if the right motivation exists, just as a majority of people now use mobile phones instead of landlines.

Many people today already distance themselves from a society they believe to exist as a hive-mind independent of any physical technological link. The Japanese word for those who physically withdraw from society in the real world is hikikomori, but such withdrawal does not preclude these people from continuing to engage in technologically mediated relationships or other forms of parasocial bonding. Clearly, the more progress technology makes in the areas of mediated relationships, games, and communication tools, the more people will choose complete social withdrawal. And if this is an option that many people eventually find appealing and ultimately choose, it invites questions of what constitutes moral responsibility for engaging actively in a society. Then again, it is very difficult to socially enforce such responsibility if those who withdraw can easily filter out such appeals as white noise.

Hikikomori Anime Boy

With Great Technology Comes Great Responsibility


A responsibility to society is a motivating factor for many people only insofar as protecting society is also interpreted as protecting self-interests. Therefore, trying to instill in others a motivating desire to be socially responsible is likewise done because of perceived rewards for the self, as in the case of celebrities, who are compensated for advertisement testimonials and participation in Public Service Announcements. The perceived subjective value of protecting society is determined by how well doing so also protects or celebrates the self. This is not so different from naches, the Yiddish word for the pride that comes from teaching another.[4014] Teaching a person how to play a game is innocuous, but teaching a person how to live is defining, possibly manipulative. A person’s views on a subject depend on how it is phrased, and techniques like neuro-linguistic programming again create the potential for ethical abuse in attempting to control others.[4015]

But while it is a clear moral imperative not to overtly subjugate other humans, a sublimation of this desire for control is present when people try to influence human behavior with technology. In the case of self-control, this is admirable, as when people attempt to improve themselves with feedback from technological devices. One example is Nike+, an accelerometer in your shoe that will tell you in no uncertain terms how fast you are running. There is also an avatar, with obvious potential for parasocial bonding, that can live on your computer desktop and reinforce positive feelings about running, although it would not likely create those positive leanings where absolutely none existed. MIT researcher Judith Donath also notes this as an example of a parasocial bond, since running with Nike+ can be seen as a way to nurture the Nike+ Mini.[4014]

And there is no ethical quandary whatsoever about trying to fix technology itself. Humanity perceives technology as either working or not working, with any attribution of sentience dismissed as a superstitious habit of personifying a piece of technology that has stopped working (“This stupid computer’s blue screen of death! It must hate me!”). So what would eventually constitute sentience in a manufactured technology? According to Kurzweil, once a non-biological entity can argue on its own, with humor, for its innate humanity, it will be accepted as sentient. Perceived poor humor, like showing up at the United Nations wearing nothing but lingerie and a top hat, à la the machine nation representatives in The Second Renaissance from the animated anthology The Animatrix, could be construed as rude and grounds for war.

Machine Nation Lingerie Representatives
Credit: The Animatrix: The Second Renaissance

And just as a brief aside, my personal interpretation of the above-pictured scene is that the machines misunderstood the adage of “picturing the audience in their underwear.” Whatever the case, it serves as an example of how misunderstood humor between autonomous machines and humans could have a disastrous outcome.

How to Argue With a Machine

Don't Stand Next to an Airlock

Ideally, a relationship with a sentient machine should follow the same path as one with a sentient human. Most people have experienced a personal relationship that could be described as dysfunctional, and made the resulting choice to either discontinue or modify that relationship as needed. If a machine were alive, but also a bad influence, it would still be the responsibility of a mature adult to scale back or end that relationship. And in a scenario where a machine was sentient and had its own civil rights, the situation does not have to be as dramatic as the shutdown of the HAL 9000 unit in 2001: A Space Odyssey. It could be as amicable as a parting of ways (assuming the machine is not trying to eject you into deep space). After all, it seems unlikely that an A.I. would have great motivation to force someone into thinking less critically.[4012]

However, until such time as machines are able to prove their own intelligence in these ways, people will continue to regard them primarily as tools, or potentially as objects of parasocial relationships. In either case, humanity will consistently perceive itself as greater than its machine creations. Kelly writes about the metaphor of being parents to technology, guiding it to prevent malum in se, or evil in itself: “The more sentient the technology, the easier it is to correct.” However, his use of language shows that he regards humanity as possessing clear authority over manufactured technology, sentient or not. Humanity may care about technology, but technology's primary purpose of applied use must be fulfilled for that care to continue to manifest.

Creating Your Own Personal Culture

The Future is Whatever You Make of It

Cory Doctorow wrote that “content isn’t king: culture is,”[4016] referring to the context in which content becomes relevant to how humans interact with each other. Caring about fictional characters, abstracted versions of famous people, and machines is thus ultimately a fanciful intellectual exercise, and all these parasocial relationships must take a backseat to actual social interaction. Machines may be able to algorithmically generate random content, but they do so absent a human intelligence that determines relevance, and have thus far been unable to manufacture their own culture. Every human, meanwhile, continues to possess the ability to create an ultimate personal culture through his or her parasocial and dynamic relationships, both with other humans and with technology.