Thursday, October 8, 2009

Suspension of disbelief

While at a play recently, a friend of mine pointed out a few things that broke his suspension of disbelief. In most artistic media, suspension of disbelief is important: it makes things feel real, or at least possible. It makes stories more compelling because we get more wrapped up in them; we learn to accept the stories' worlds as our own, which makes identifying with the characters easier.

Nowadays, however, it is very difficult to suspend disbelief. Many of us know so much trivial information that it ruins our acceptance of a story if the story contradicts those facts. In addition, media are so focused on being as accurate and detailed as possible that we can't help but build up this mental database of trivia. And, for the most part, it's not a bad thing to know a lot of different things. We value things that are depicted accurately, and rightly so. This clears up a lot of biases that people hold, which can be a great boon.

In the realm of artistic expression, however, this isn't entirely helpful. In order to make things believable, to suspend the audience's disbelief, the artist has to really do his homework. I mean that he has to research in depth all aspects of his creation and reflect that research in his work. That's not a bad thing in itself, but putting so much work into a secondary aspect of a project can detract from its primary purpose.

My friends and I are the type of people to go into a sci-fi movie and whisper to each other about how certain things aren't physically possible, or about how things aren't accurately portrayed on-screen. I have gone to historical fiction films and made a list of things that were historically inaccurate, then separated those inaccuracies based on whether or not they were essential to the plot. I may be a geek - and a proud one, I might add - but I know that I am nowhere near alone in noticing these kinds of discrepancies, though I will admit I'm one of the few to actually jot them down. I feel like this is a product of our well-educated, information-rich society.

Think back, if you will, to about thirty-five years ago. A plethora of horrible horror movies existed which were much more effective than today's, despite today's flicks having much nicer computer-generated graphics and more realistic make-up. As great as the original Star Wars movies are, I still turn away from the horribly choreographed sword-fights. On the other hand, I just saw the second Transformers movie. It was a ridiculously bad movie, but the graphics were beautiful and worth seeing. Again, I know that I am not alone in this regard.

Regardless, the times, they are a-changin'! The damage is done, and now we can only hope to move forward. I realize that not every movie or television series can be as accurate and intense as Eureka or Battlestar Galactica. But, hey, Lost tries, and despite its occasional fact-bending, it holds up pretty well - not to mention that it's a great series.

Yes, I may be to blame for my own intolerance of incorrect facts and inconsistencies, but I'm sure that the number of people who agree with me is growing. And really, is it so hard to get things right? At the same time, I definitely think we need to learn to look past this kind of thing a little more, so that we can appreciate a work for its message while artistic media catch up with our demand for high quality. I can appreciate that getting the details right is tedious and requires quite a lot of work, but the effort is worth it in order to have a much more convincing world in place. The more convincing the picture, the more likely we are to see the artist's message without getting distracted.

Unless, of course, the point is the distraction itself.

Saturday, October 3, 2009

Of languages, computer and human

I noticed an interesting parallel between human language and computers the other day. I was conversing with a fellow linguist about language and technology, and I sort of stumbled upon it.

The progress of language mirrors the progress of computers, but with the roles of storage and process reversed.

In the days of early computing, we had vacuum tubes and punch cards for data storage. To get work done, one used a terminal (one of many) connected to an always-running central computer, and most everything was done in memory. Then along came something wonderful: magnetic storage. Now you could store data much more efficiently, to be accessed and modified later. As with all things, however, there were a few obvious problems: one was capacity, and the other was price. Price went down over time, and personal computers became more and more common.

Capacity was the other issue. In order to save as much space as possible (i.e. maximize available capacity) and keep prices down, interfaces didn't progress much for a while. Text-based command-line input did wonders, but it proved hard for many people to adapt to; there was a significant learning curve. Still, the efficiency of these systems was such that they exist and are in use today. And the more predictable and formulaic commands became, the easier things got.

Then along came a concept called the GUI - the Graphical User Interface. As storage capacity increased and prices dropped further, it became feasible to run an interface that was easier to use, at the expense of being less direct and less efficient. This trade-off mattered because it opened computers up not just to those familiar with them, but to those who had never even touched one before. It allowed access to "outsiders," people who weren't members of the computing community. The learning curve dropped. Progress, however, became even more dependent on increasing storage capacity.

On a related note, as computers become dated, the more "tech-savvy" among us often return to leaner operating systems to run on older hardware. Linux is a favorite for many, because by cutting bloat you can run current software on these older machines and still have them be usable. I myself turned an older computer into a server devoid of any GUI. Without that "bloat," it remains useful and usable for many things, and I won't be as affected by software deprecation as I would be if I had left an old, no-longer-updated operating system running.

You can see how the progress of personal computers was based on simplifying the user interface, so that people could use them effectively even if they weren't great with the technology. Accessibility came at the cost of dependence on storage. As storage became available at higher and higher capacities, processing efficiency became less important to the average user (think of today's average user, not the 1980s average user).
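
Here's that trade-off in miniature. This is just a toy sketch of my own (the function names are made up for illustration), but it shows how spending storage makes a process cheap:

```python
from functools import lru_cache

# "Process-based": recompute everything on every call. No storage is
# used, but the work grows exponentially with n.
def fib_process(n):
    return n if n < 2 else fib_process(n - 1) + fib_process(n - 2)

# "Storage-based": trade memory for speed by caching every intermediate
# result - the same bargain that cheap disks offered software in general.
@lru_cache(maxsize=None)
def fib_storage(n):
    return n if n < 2 else fib_storage(n - 1) + fib_storage(n - 2)

print(fib_storage(200))  # returns instantly; fib_process(200) never would
```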

The progress of language also works towards increasing accessibility.

Long ago, language was a very difficult thing to grasp. This seems counter-intuitive because language is so fundamental, but it's easier to see when you look at the number of people who were bi-, tri-, or multi-lingual: far fewer than now. There are many reasons for this, such as the fact that globalization hadn't progressed nearly as far as it has now. You can see it in the biases of the Western world and of early Sanskritic society in India. The Greeks took such pride in their language that they deemed anyone who could not speak it uncivilized (our word "barbarian" comes from their mockery of foreign speech as "bar bar"). That attitude carries through time in the concept of "The White Man's Burden." Of course, that particular example is not based solely on language, but I've always found that language and culture tie together so intimately that they almost always go together, especially when analyzing one's cultural identity.

When we look to the East, however, we find that multi-lingualism increases. This is in no small part due to the silk routes running from the Middle East, through India, to China. The advantages of being multi-lingual were many, especially in trade. It also mattered that the Middle East, India, and China all retained many subcultures, each with its own language. India today has over twenty official languages, not to mention the many "dialects" of China (most Westerners know only of Cantonese and Mandarin, but there are many more).

Back to the point: another important reason that multi-lingualism was difficult is that languages had different characteristics from those of today. The largest spoken language family is Indo-European, and its languages were originally highly inflected, with word order mattering far less. That is, words took many, many different forms based on their use in a sentence: verbs had many more conjugations than we usually see today, and nouns declined through a whole system of cases. Latin, Greek, and Sanskrit are the primary examples here - lots of word forms and usages. The benefit was that meaning was (in many ways) easier to convey: on the whole, though by no means always, one could convey more precise information using fewer words, because the word endings carried the meaning. In Latin, for example, "puella puerum amat" ("the girl loves the boy") means the same thing in any word order, because the endings mark who is the subject and who is the object.

[Important and notable exceptions to this rule are languages of the Sino-Tibetan family, such as the Chinese languages, and others that were not directly related to, or in contact with, Indo-European languages.]

Now, growing up speaking these languages was one thing; learning them later was quite another. Unless you grew up in a place where more than one language was spoken, you wouldn't necessarily learn another language until adulthood, and learning languages becomes significantly harder after your teens. As a result, speaking more than one language fluently was rarer (more so in the West, as I said before) than it is today.

When you look at an inventory of words in these languages, you may notice that each verb root has many, many different variations, which may or may not be formulaic. You could consider these highly inflected languages to be more "storage"-based than the Chinese languages or today's languages. As time went on, languages diversified, but they came to rely less on nominal cases, simplified their verb tenses and conjugation groups, and leaned more on word order. As a result, you learned more differentiated words and fewer eccentric morphological endings. Even knowing fewer word forms overall, you could still convey meaning, albeit with more words in each sentence. Less efficient, but much easier to learn for those who weren't so great with languages. Multi-lingualism just got a whole lot easier - well, not "just," since this was slow, steady progress over centuries, but you can see how and why things changed.
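
To make the analogy concrete, here's a toy sketch of my own in Python, using a deliberately tiny fragment of Latin and English (the names and data are made up for illustration):

```python
# "Storage-based" language: every inflected form is memorized up front,
# like entries in a lookup table. (Only three of Latin's many forms shown.)
latin_forms = {
    ("amare", "1sg", "present"): "amo",   # I love
    ("amare", "2sg", "present"): "amas",  # you love
    ("amare", "3sg", "present"): "amat",  # he/she loves
    # ...dozens more per verb, all carried in memory
}

# "Process-based" language: few stored forms; word order does the work
# that the endings used to do.
def english_clause(subject, verb, obj):
    return f"{subject} {verb}s {obj}"

print(latin_forms[("amare", "3sg", "present")])       # amat
print(english_clause("the girl", "love", "the boy"))  # the girl loves the boy
```

The lookup table is dense but heavy to memorize; the little function is wordier per sentence but trivial to learn. That's the whole trade-off.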

Orthography is important, but in my opinion it didn't matter as much to the average person until the Arab empire rose to its height. That was the great era of copying and preserving texts, leading up to Gutenberg's invention of the printing press. Prior to that, writing was important, but not so integral to the learning of a language, especially if that language was one's second, third, or fourth.

Another benefit of more formulaic language is that it frees us up to think more abstractly. Language becomes less of a pure inventory, so we're free to remember other things. It also demands less attention, because we can always add more words later to resolve an ambiguity. This was always true, but it's much more apparent now - at least, so I've noticed. Either way, we're freer to multitask.

So here we can see a definite shift towards accessibility at the (relatively slight) cost of efficiency. However, you may have noticed, as I have, a few important differences between this and the progress of computers.

Computers moved from always-on, process-centered, centralized systems towards individual machines made accessible by ever-increasing storage. Language moved the opposite way: away from pure "storage" towards more formulaic usage. It became more "process-friendly," in a sense, and this is especially true once orthography enters the mix. Knowledge can be written down, stored, and accessed later, but the process of learning (how to read, especially) takes on greater weight.

The thing about the older languages is that in their earliest forms (Mycenaean Greek, Vedic Sanskrit, and Old Latin), many of these inflections weren't so standardized. There were more exceptions to the rules, and these tended to decrease over time, much like the command-line systems whose conventions eventually became nigh-universal. Another similarity is context. As Latin, Greek, and Sanskrit became liturgical languages, they were standardized, much as Linux commands were even as the use of Windows and Mac OS rose. Latin and Greek also came to be used specifically for scientific naming, just as Unix and Linux are the arguable defaults for high-end servers.

As technology tries to break through the limitations of today's magnetic storage, we should take some time to think about our language. In light of today's post, I challenge you to read, write, and speak more efficiently - more precisely, with fewer words - at the cost of speed and time. I guarantee that if you do this for a while, you will gain something from the simple act of moving a little more slowly, along the lines of "Ungeek to Live."