Understanding Mark V. Shaney
The year was 1984. The Internet was still for the few, but unstoppably growing from a base of academia and large high-tech companies. The Web did not exist yet, instead there was Usenet. Thousands upon thousands of discussion groups in a tree-shaped hierarchy. Buzzing with activity. It was here you found all the exciting stuff.
On Wednesday, September 12th, Mark V. Shaney sent his first Usenet post. It went to the open net.singles group, and it contained this classic:
Mark V. Shaney
12-Sep-84 16:51:03 EDT
[...]
When I meet someone on a professional basis, I want them to shave their arms. While at a conference a few weeks back, I spent an interesting evening with a grain of salt. I wouldn't take them seriously! This brings me back to the brash people who dare others to do so or not. I love a good flame argument, probably more than anyone...
[...]
These confused ramblings must have puzzled the denizens of net.singles. What they didn't know was that Mark V. Shaney was in reality the program shaney.
To be fair, it took them less than a week to guess it.
Rich Rosen
18-Sep-84 22:13:31 UTC
It's interesting that composing an article from random segments of other articles thrown together (the Brian Eno method of article writing) produces such fascinating results.
[...]
Thank you so much, Mr. Non-artificial Non-intelligence Non-project!
[...]
Mark V. Shaney's postings continued for approximately one year. They became part of the group's chatter. To some they were noise that could be filtered out; to others they were highly appreciated. Some of the net.singles denizens called for the programmer to come forward, but shaney never admitted to being a program. In fact, it denied it multiple times, in its own messy and clearly artificial way.
Then, in June 1989, A. K. Dewdney set the record straight by writing about Mark V. Shaney in his Scientific American column Computer Recreations. shaney had been developed by Bruce Ellis and Rob Pike as a prank. They had Internet access through Bell Labs, where they worked.
Dewdney explained that the name Mark V. Shaney was a pun on "Markov chain", the main algorithm used by shaney. It had no language-parsing code; shaney simply learned the language by reading Usenet posts. That was why its output looked like the posts in net.singles. A messed-up version of the posts, for sure, but clearly belonging in that group.
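The idea is simple enough to sketch in a few lines. This is not shaney's actual code, just a minimal illustration of a word-level Markov chain (here of order 2): the program records which word follows each pair of words in the training text, then generates output by repeatedly sampling one of the recorded successors.

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each run of `order` consecutive words to the words seen after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=30, seed=None):
    """Walk the chain: repeatedly pick a random recorded successor."""
    rng = random.Random(seed)
    key = rng.choice(list(chain))
    out = list(key)
    for _ in range(length):
        successors = chain.get(key)
        if not successors:  # dead end: this key was never followed by anything
            break
        nxt = rng.choice(successors)
        out.append(nxt)
        key = key[1:] + (nxt,)
    return " ".join(out)
```

Because every generated word really did follow its two predecessors somewhere in the training text, short stretches of the output read as plausible English. Coherence only breaks down across longer spans, which is exactly the effect visible in the post quoted above.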
Relating to computers
In the late 1990s, psychologists started noticing that we relate to computers in much the same ways we do to humans. The more they looked into it, the more this view was substantiated.
Politeness experiment. This type of experiment was pioneered by Clifford Nass.
Task 1: The test subject is placed at computer A and answers a multiple-choice test.
Task 2: The test subject is then asked about the performance of computer A.
Task 2 is done either
- on computer A,
- on computer B (has same specs as A), or
- with paper and pencil.
These experiments have consistently shown an interesting difference in the answers given in task 2: those given to computer A are significantly more positive than the others.
The test subjects fully understood that computers don't have feelings, and that it is completely irrational to be polite to them. But on a deeper level they couldn't help being polite to computer A.
Reciprocity experiment.
Task 1: The test subject is placed at computer A, and uses it to search for information.
Task 2: The test subject then calibrates the colors of a computer screen.
The outcome of task 1 can be either
- useful, or
- not at all useful.
The color calibration can be done either
- on computer A, or
- on computer B (has same specs as A).
The test subject ends the calibration when the colors are "good enough".
These experiments have shown clear evidence of reciprocity. When useful information was given in task 1, the calibrations done on computer A used more comparisons and were more accurate than those done on computer B. The experiments even showed a "revenge" effect: when the information given in task 1 was not at all useful, the calibrations done on computer A used fewer comparisons than those done on computer B.
The test subjects fully understood that reciprocity towards computers does not make any sense. But on a deeper level they couldn't help showing reciprocity towards computer A.
Notice how the computers in both experiments sent a minimum of anthropomorphic signals. In the first experiment, the interface was just text-based.
If the computer sends stronger anthropomorphic signals, for example by using speech, the social relation becomes stronger, which really surprises no one.
The findings have been shown to go even further: we also cannot help forming social relations with chatbots. This type of relation becomes much stronger, but also seems to be more complicated. It is an area of active research at the moment, so I will not go into it here.
There is a story Rob Pike has told about a USENIX conference he once attended. A group of net.singles denizens attending the conference were going to an in-person dinner, and someone asked the Bell Labs guys if Mark V. Shaney was at the conference and would like to join. As we know, they were fully aware that Mark V. Shaney was a program, so their request was highly illogical.
Here it is worth mentioning that many net.singles denizens really liked the Mark V. Shaney posts. So it is probably safe to say that on some level they had formed social relations with the program, and thought it would be nice if Bell Labs somehow could flesh out that relation. Very illogical, yes, but also very human.
Error Correction
Take a closer look at what Mark V. Shaney wrote:
While at a conference a few weeks back, I spent an interesting evening with a grain of salt. I wouldn't take them seriously!
What is "them" referring to here? The grain of salt mentioned in the previous sentence is singular, but "them" is plural. So maybe it refers to the conference attendees, maybe to the conference organizers. If so, shouldn't it say "couldn't" instead of "wouldn't"? Maybe it refers to grains of salt in general. It is anyone's guess. But we are looking for something that isn't there, because the shaney program couldn't have had anything in mind when it wrote it.
Back when you read it at the beginning of the article, chances are you barely noticed this. Your mind auto-corrected it and assigned it some reasonable meaning. It isn't an important part of the post anyway, so your mind handled it in a fine and effective way.
If you pay close attention, you will notice that text that hasn't been proofread contains lots of errors, both syntactic and semantic. Even text you write yourself contains errors. And this applies even more to spoken language. We are constantly running into communication errors that we barely notice.
Our auto-correction saves us from stopping and correcting each other all the time. It has been like this for ages. If you think about it, we must have had auto-correction at least since we started using spoken language.
But auto-correction can also create problems now and then, like when we mishear what is being said, or substitute one meaning for another.
Or making sense of the output from a chatbot that we are socially attached to.
Summing up
With LLMs, or AI in mediaspeak, there seems to be a major new announcement every week: now AI scores higher than children in this test, now AI is better than students doing that task, now AI can do some work better than the workers, et cetera. Some people love it, some hate it. It is all so very emotional. And of course it is, it has to be.
LLMs are far more sophisticated than shaney, and even the most subdued of them send stronger anthropomorphic signals. So our social attachment to them is likewise much stronger than the one those net.singles people at the conference had.
Because we are so emotional about LLMs, we are terrible at assessing their achievements. We are by nature not clear-headed when thinking about them. When we hear a sensational AI claim, we should demand objective quality measures that support it. Otherwise we are just taken for a ride.
This turned into one of my longer articles. Some of the readers who got this far will now sit back and say: "That's all fine, but I'm rational, so I'm not affected. Especially now that I know about it." That could even have been me when I started looking into this. But be careful not to fool yourself. You are rational, yes, but not only rational.