Tuesday, November 17, 2009

The Internet and driving

While I was witnessing a well-warranted and funny episode of road rage today, it suddenly occurred to me how similar surfing the Internet is to driving.

Every intersection is akin to a forum or a chatroom, where there's the possibility that someone will oblige you with a nicety, but more (or less, if you live among kinder folk) likely, someone won't let you in, or will block you, or cut you off, and so on. Visiting websites is like stopping at points of interest. Road-tripping is very akin to online collaboration, the likes of which will only increase thanks to Google Wave. In the big picture, the main similarity is this: just as people often forget that those who post on forums and in chatrooms are actual people, drivers forget that the people in the other vehicles are people too. People aren't perfect, and they make mistakes.

On the net, nothing's THAT important that we should forget that other people are actually sitting in front of computers on the other end of those conversations. If you acknowledge their humanity, you end up being less misanthropic and angry towards them, or so I see it.

Driving is a bit different because if someone makes a big enough mistake, then it becomes a life-or-death situation. Does that truly justify our anger, or are we just pompous about how we drive? We can't possibly expect everyone to know we're in a rush. And if this kind of anger is justified, then what justifies our anger on the net, where the situation is so far from dire?

Everyone makes mistakes, right?

Thursday, October 8, 2009

Suspension of disbelief

While at a play recently, a friend of mine pointed out a few things which, he said, broke his suspension of disbelief. In most artistic media, suspension of disbelief is important: it either convinces us that things are real or makes them seem as though they could be possible. It makes stories more compelling because we get more wrapped up in them; we learn to accept the stories' worlds as our own, which makes identifying with the characters easier.

Nowadays, however, it is very difficult to suspend disbelief. Many of us know so much trivial information that it sort of ruins our acceptance of a story if it includes something which contradicts those facts. In addition, things are always so focused on being as accurate and detailed as possible that we can't help but build up this mental database of trivia. And, for the most part, it's not entirely a bad thing to know a lot of different things. We value things that are depicted very accurately, and rightly so. This clears up a lot of the biases that people have towards things, which can be a great boon.

In the realm of artistic expression, however, this isn't entirely helpful. In order to make things very believable, to earn the audience's suspension of disbelief, the artist has to really do his homework. I mean that he has to research in depth all aspects of his creation and reflect all of that in his work. That's not to say that it's a bad thing, but putting so much work into a secondary aspect of a project really does detract from the primary purpose of one's creation.

My friends and I are the type of people to go into a sci-fi movie and whisper to each other about how certain things aren't physically possible, or about how things aren't accurately portrayed on-screen. I have gone to some historical fiction films and made a list of things that were historically inaccurate. Then I've separated those inaccuracies based on whether or not they were essential to the plot. I may be a geek - and a proud one, I might add - but I know that I am nowhere near alone in noting these types of discrepancies, though I will admit I'm one of the few to actually jot them down. I feel like this is a product of our well-educated and information-ready society.

Think, if you will, back to about thirty-five years ago. A plethora of horrible horror movies existed which were much more effective than today's, despite today's flicks having much nicer computer-generated graphics and more realistic make-up and stuff. As great as the original Star Wars movies are, I still turn away from the horribly choreographed sword-fights. On the other hand, I just saw the second Transformers movie. It was a ridiculously bad movie, but the graphics were beautiful and worth seeing. Again, I know that I am not alone in this regard.

Regardless, the times, they are a-changin'! The damage is done, and so now, we can only hope to move forwards. I realize that not every movie or television series can be as accurate and intense as Eureka or Battlestar Galactica. But, hey, Lost tries, and despite its occasional fact-bending, it's not so bad overall, not to mention that it's a great series.

Yes, I may be to blame for my own lack of tolerance for incorrect facts and inconsistencies, but I'm sure that the number of people who agree with me is growing. And really, is it so hard to get things right? At the same time, I definitely think we need to learn to look past this kind of stuff a little more, so that we can appreciate something for its message while artistic media catch up with our demand for high quality. I can definitely appreciate that getting things right gets very tedious and annoying, and requires quite a lot of work, but it's worth the effort in order to have a much more convincing world in place. The more convincing the picture, the more likely we are to see the artist's message without getting distracted.

Unless, of course, the point is the distraction itself.

Saturday, October 3, 2009

Of languages, computer and human

I noticed an interesting difference between humans and computers the other day. I was conversing with a fellow linguist about language and technology and I sort of stumbled upon it.

The progress of language is the same as the progress of computers, but their sources are reversed.

In the days of the early computers, we had vacuum tubes and punch cards for data storage. In order to work, one used a terminal (one of many) connected to an always-running computer. Most everything was done in RAM. Then along came something wonderful: magnetic storage. Now you could much more efficiently store data to be accessed and modified later. As with all things, however, there were a few obvious problems. One was capacity and the other was price. Price went down over time, and personal computers became more and more common.

Capacity was another issue. In order to save as much space as possible (i.e. maximize available capacity) and to keep the price down, interfaces didn't progress all that much for a while. Text-based command-line input did wonders, but it proved to be hard for many people to adapt to. There was a significant learning curve. However, the efficiency of these systems was such that they still exist and are in use today. And, the more that commands were predictable and formulaic, the easier things got.

Then along came a concept called the GUI - the Graphical User Interface. As storage capacity increased and prices went down further, it became much more feasible to run an interface that was easier to use, at the expense of being less direct and less efficient. This trade-off really mattered because it allowed access not just to those familiar with computers, but to those who had never even touched one before. It allowed access to "outsiders," those who weren't members of the computer-based community. The learning curve dropped. Progress, however, became even more dependent on increases in storage capacity.

On a related note, as computers become dated, those who are more "tech-savvy" often return to leaner, more efficient operating systems to run on their older hardware. Linux is a favorite for many. The reason is that by cutting down on bloat, you can run new software on these older machines and still have them be usable. I, myself, turned an older computer into a server devoid of any GUI. Without the "bloat" of a GUI, it remains very useful and usable for many things. And I won't be as affected by software deprecation as I would be if I had left an old, no-longer-updated operating system running.

You can see how the progress of personal computers was based on simplification of the user interface, so that more people could use them, even if they weren't great with the technology. Accessibility came at the cost of dependence on storage. As storage became available at higher and higher capacities, processing efficiency became less important for the average user (think of today's average user, not the 1980s average user).

The progress of language also works towards increasing accessibility.

Long ago, language was a very difficult thing to grasp. This seems counter-intuitive because language is so fundamental, but it's easier to see when you look at the number of people who were bi-, tri-, or multi-lingual - much lower than it is now. There are many reasons for this, such as the fact that globalization wasn't nearly what it is now. This can easily be seen in the biases of the Western world and of early Sanskritic society in India. The Greeks took pride in their language, so much so that they deemed anyone who could not speak it to be uncivilized. That attitude definitely carries through time in the concept of "The White Man's Burden." Of course, that particular example is not based solely on language, but I've always found that language and culture tie together so intimately that they almost certainly go together. This is especially true when analyzing one's cultural identity.

When we look to the east, however, we find that multi-lingualism increases. This is in no small way based on the silk routes from the Middle East, through India, to China. The advantage of being multi-lingual is multi-faceted, especially regarding the business world. It was also important because the Middle East, India, and China all retained many subcultures, each with their own language. India today has over 20 official languages, not to mention the many "dialects" of China (most Westerners only know of Cantonese and Mandarin, but there are many more).

Back to the point: another important reason that multi-lingualism was difficult was that languages had different characteristics than those of today. The largest spoken language family is the Indo-European family. These languages were originally highly inflected, and word order mattered less. This means that words had many, many different forms based on their use in a sentence. Verbs had many more conjugations than we often see today, and their associated nominal usages also required lots of rearranging. Latin, Greek, and Sanskrit are the primary examples here: lots of word forms and usages. The benefit to this was that it was easier (in many ways) to convey meaning. On the whole, though definitely not always by any means, one could convey more precise information by using fewer words, because the word endings themselves carried much of the meaning.

[Important and notable exceptions to this rule are languages in the Sino-Tibetan family, such as the Chinese languages, and other such languages that were not directly or immediately related to or in contact with Indo-European languages.]

Now, growing up speaking these languages was one thing, and learning them was quite another. Unless you grew up in a place that spoke more than one language, you wouldn't necessarily learn the other language until you were an adult, and learning languages becomes significantly harder after your teens. As a result, speaking more than one language fluently was rarer (more so in the West, as I stated before) than it is today.

When you look at an inventory of words in these languages, you may notice that each verb root has many, many different variations, which may or may not be formulaic. You could consider these highly inflected languages to be more "storage"-based than the Chinese languages or today's languages. As time went on, languages diversified, but they began focusing less on nominal cases, simplifying verb tenses and conjugation groups, and relying more on word order. As a result, you learned more differentiated words and fewer eccentric morphological endings. If you learned fewer word forms on the whole, you could still convey meaning, albeit with more words in each sentence. Less efficient, but much easier to learn for those who weren't so great with languages. Multi-lingualism got a whole lot easier - not overnight, of course; this was slow, steady progress over centuries - but you can understand how and why things changed.
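
To make the "storage" idea a bit more concrete, here's a toy sketch using a handful of well-known Latin verb forms (my own illustrative examples, not taken from this post or any particular grammar): each single inflected word packs in person, number, tense, and voice that English has to spell out with several words in a fixed order.

```python
# A toy illustration of "storage-based" inflection: one Latin word form
# carries grammatical information that English expresses through word
# order and extra words. (Illustrative examples only.)
latin_forms = {
    "amo":       "I love",
    "amas":      "you love",
    "amat":      "he/she loves",
    "amabam":    "I was loving",
    "amabitur":  "he/she will be loved",
    "amabantur": "they were being loved",
}

for latin, english in latin_forms.items():
    print(f"{latin:10} -> {english} ({len(english.split())} English words)")
```

The inflected language "stores" more distinct forms up front; the analytic language "processes" meaning out of simpler, reusable pieces arranged in order - which is exactly the trade-off I'm drawing with computers here.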

Orthography is important, but in my opinion, it didn't matter as much to the average person until the Arab empire rose to its height. This was the era of copying and preserving, leading up to the invention of the printing press by Gutenberg. Prior to that, writing was important, but not so integral to the learning of language, especially if that language was a second, third, or fourth one.

Another benefit to more formulaic language is that it frees up time to think more abstractly. Language becomes less of a pure inventory, so we're free to remember more. It also requires less attention because we can always add more words to alleviate ambiguity later. This was always true, but is much more apparent now. At least, so I've noticed. Anyway, this way, we're free to multitask better.

So, here we can see a definite change towards accessibility at the (relatively slight) cost of efficiency. However, you may, as I have, noticed a few important differences between this and the progress of computers.

Computers moved from always-on, process-based, centralized systems towards individual computers that were more easily accessible because of the presence and development of ever-increasing storage. Language moved away from pure "storage" towards more formulaic usage. It became more "process-friendly" in a way, and this is definitely true when we consider the entrance of orthography into the mix: knowledge can be stored and accessed later, but the process of learning (how to read, especially) becomes more important.

The thing about the older languages is that in their earliest forms (Mycenaean Greek, Vedic Sanskrit, and Old Latin), many of these inflections weren't so standardized. There were more exceptions to the rules, and these tended to decrease as time went on, much like the command-line-based systems whose predictability eventually became nigh-universal. Another similarity is context. As Latin, Greek, and Sanskrit became liturgical languages, they were standardized, much as Linux commands were alongside the rising use of Windows and Mac OS. Latin and Greek also became the standard for scientific naming, just as Unix and Linux are the arguable defaults for high-end servers.




As technology tries to break through the limitations of today's magnetic storage, we should take some time and think about our language. In light of today's post, I challenge you to take the time to read, write, and speak more precisely and economically, even at the cost of speed and time. I guarantee that if you do this for a while, you will gain something from the simple act of moving a little more slowly, along the lines of "Ungeek to Live."

Friday, July 17, 2009

Funniest Buddhism joke ever.

I think I just found the funniest Buddhism joke ever. It's pretty dark, but if you're Buddhist, you should be used to at least a little bit of nihilism. And, you probably shouldn't take offense to it. =P

Saturday Morning Breakfast Cereal - #769

Monday, July 13, 2009

A long-misplaced analogy

Operating systems have changed a lot from when computers first fell into the mainstream. Movies like "Hackers" and "The Pirates of Silicon Valley," despite their errors and inconsistencies, still popularize computer culture (as well as the computer-related counter-culture).

For those of you who are really familiar with computers, there are a variety of operating systems out there to make things more convenient for the end user. They also provide major headaches when things go wrong. As if you couldn't tell by the premise of this blog, religious paths can work the same way.


[Let's take a moment to understand that any copyrighted names of software or anything else don't belong to me, but to their actual owners, stated here or otherwise. It's the internet; I'm using names and very clearly not trying to steal them, so don't sue me.]


Let's take a look at Windows. Windows is, more or less, the de facto standard for both business and pleasure. Consequently, Windows also has a lot going for it in the way of software. This is especially true for people who can't find applications for their purposes; they end up programming for Windows because that's what they use. In the business world, it's fairly easy to see why Windows wins. Customer support is provided world-wide, and to date, I have not encountered more diverse, more easily accessible, or more integrated language support. You also have a MUCH wider variety of hardware components and software utilizing them.

On the downside, Windows starts to feel really bloated after only a few months of use. Sometimes, with such a variety of hardware and software (not to mention parsing through drivers), you'll find stuff that acts up, stuff that doesn't work properly, and stuff that just disappears. Let's not forget that being the de facto standard means that most forms of malware are designed to exploit any and all vulnerabilities in the software. Fixes only provide temporary relief. I won't even begin to discuss Windows' paltry CLI.

There's also Apple. Mac OS, especially from Tiger onwards, is really, really easy to learn and to use. Nearly all aspects of the OS are fully integrated with each other. iTunes, the iLife Suite, and other software solutions all work together very well. And, being based on FreeBSD means that those who like Unix can jump into the Darwin prompt and do things in the "old-fashioned" non-GUI way. There's also an almost non-existent threat of malware. Things in general run more smoothly because Apple decides what hardware is supported, and so there's less bloat in terms of drivers and such.

This approach has its drawbacks, however. There's MUCH less flexibility in the way of hardware configuration. This also means you're paying more for what you're getting (in hardware), and you can't configure a comparable system for less because the software is effectively tied down to the hardware. There ARE those who have gone the route of "Hackintoshes," but that practice has yet to be deemed legal, and there's a whole slew of problems on that route as well. Things often don't work perfectly, and you really have to know what you're doing to fix those problems on your own. Other drawbacks of the Mac platform include sub-par language support, occasionally restrictive software choices, and being bound to specific companies based on hardware configuration. This last one is a big deal to some people, as Apple doesn't support ATI or AMD up-front (buying a card for your desktop would be a slightly different issue, in my opinion).

And then there's Linux, or Unix, or Solaris, or what have you. These are really interesting OSes because they have their roots in computing's early days. While the other two platforms are almost completely GUI-oriented, with the *nix group you have a choice in the matter. Underlying Linux is a command-line interface that allows you to interact with the OS directly, in a very different way from the GUI. If you choose, you can have the GUI instead, or you can have both. And the commands you learn, or the programs you learn to use via the CLI, can be used on other systems and other distributions of Linux. It's based on "old-school" ideas of freedom ('free' as in 'beer' as well as 'free' as in 'speech'). The community surrounding each distribution, and the platform on the whole, is amazing and very supportive. And if you have old hardware that you don't want going to waste, Linux can breathe new life into it.

The drawbacks here are numerous, though. The end-user at home doesn't get much in the way of technical support, at least not like you can get with Windows or Mac OS. You often have to resort to the community for help, which isn't so much a bad thing as it is a waiting game. Sometimes you go through a lot before you find a solution. Sometimes you don't get proper answers. Sometimes the first suggestion you get works. It's not always consistent. Even Ubuntu - often hailed as the most newbie-friendly distro - doesn't always behave the same on every system. If you're buying or building and you plan on running this type of OS, you have to do your homework to know exactly what will work easily, what needs work, and what won't work at all. There's also a steep learning curve, not so much because it's hard but because there's so much to learn, and it all depends on what you're running and trying to do. One consolation is that as you learn, things get much, much easier, up to a point. Some of what you can do is truly amazing; you just have to figure out what you're doing. Eventually you can play around with things and get a feel for what they do. If you don't develop that type of approach, though, you face a significant disadvantage.



Religious paths work similarly. You have those approaches which offer diversity and are easily accessible, or are very mainstream. However, they're not always "user-friendly." Sometimes you get a lot of contradictions. Sometimes they appear counter-intuitive.

The least outrageous comparison is that of the Linux/Unix type of OSes to mystical paths. On mystical paths, like Wicca, New Age paths, Neo-Paganism, Zen/Vipassana/Tibetan Buddhism, Yoga, or even the Hindu Monastic sects, the community is a very integral part of the learning process. The learning curve is not the easiest, and finding good sources for guidance is often a trial in and of itself. Still, you really get a feel for direct, intuitive interaction with the Divine. If you choose to have some visual or ritualistic approaches, then those are available as well. Mainstream aspects are often found as-is or adapted for mystical paths. And, sometimes things like meditation find their way into more mainstream paths as well.

The Windows of the religious traditions would most probably be the faith-based paths. Easily accessible, nearly universal in their approach, and at this point, a de facto standard for approaching Divinity. At the same time, they have their flaws. It's much harder to preach faith to a group when you have some "free-thinkers" in the bunch. I'm not saying it's not possible, but it definitely requires significant effort and time to reach out to them, as opposed to those who take to faith unquestioningly. On the other hand, you usually get a high volume of people who take to your "platform." Religions that fall into this category are mostly mainstream: most Christian denominations, Islam and Judaism to a large extent, some 'Hindu' sects like Swaminarayanism, and many MahaayaaNa Buddhist schools, such as Pureland.

There aren't many "Macs" in the slew of religions in the world. I'd argue that mainstream Hinduism falls into this category, as does Sufiism, some Judaic traditions (the study of Kabbalah comes to mind), and Eastern Orthodox Christianity. Hinduism is very Mac-like because of the openness inherent in its paths. The Bhagavad Giitaa itself affirms the karma-yoga, jn~aana-yoga, raaja-yoga, and bhakti-yoga, so it has its easy-to-learn side, but also the tinker-under-the-hood side. Most schools generally agree that Sufiism is technically an offshoot of Islam, so as such, you are required to obey the five pillars of Islam. But, Sufiism also offers a mystical approach that's distinctly different from the Linux-like religions. Asceticism is kept to a minimum, and Sufis are encouraged to have families and marry, and to have jobs and contribute to the community. Instead of seeing those aspects as a hindrance to their path, they value the lessons learned from them and relish the added difficulty they provide. The Eastern Orthodoxy's approach is slightly different, with lay people encouraged to take spiritual, meditative retreats, and under proper guidance, use mystical means to add to their religious experiences. In contrast, though, you're definitely stuck in terms of "hardware." You'd be hard-pressed to dedicate yourself to most Sufi schools without converting to Islam. In order to properly understand the teachings of the Kabbalah, it's usually understood that not only should you be aware of the Judaic tradition, but also have Rabbinical training. Eastern Orthodoxy and Hinduism also have their baggage as well.


I'm not saying that one's better or worse. Ultimately, technology should work based on your needs. I feel that religion should, too. The trick is that while most of us can admit it to ourselves when we're indulging a little in a new computer, we don't always do the same with life as a whole. As such, we can't properly assess our needs.

It's a wise man who can differentiate between his needs and his wants, let alone choose to follow his need over his want.

Still, I think we should really try to think about religion as something that's meant to ultimately help us, even if we find ourselves restrained a little. It's also something that needs some sort of updating. We definitely face issues that we didn't face two-thousand-something years ago, and we also have knowledge we didn't have back then. Fixing bugs is important, and it also doesn't hurt to add new features.

Wednesday, April 29, 2009

King of the Hill and Buddhism

When I took Buddhist Philosophy at Rutgers, my professor played for us an episode of King of the Hill: Won't You Pimai Neighbor?

It's a great episode, and a lot of interesting Buddhist philosophy can be found in it, but really it's the ending that does it. Bobby finds out that he's the reincarnation of a lama after a few monks come to visit the parents of his girlfriend, Connie. Things go well for a time while they wait for another monk to come and confirm the finding - that is, until Connie and Bobby find out that lamas are celibate monks and cannot have relationships. Bobby worries a lot and even considers trying to throw the test, but realizes he can't when Connie explains her feelings about her religion and tells him what would happen if he purposely failed.

It's a heartwarming episode. At the end, he's supposed to choose among the effects of the old lama. He chooses Connie, by pointing to her reflection in the mirror. After everyone leaves, one of the monks talks to the newly arrived head monk, pointing out:

MONK # 1: But that was Sanglug's mirror.
HEAD MONK: I know, but he didn't pick it.
MONK # 1: But he used it.
HEAD MONK: Tough call. But it's mine, and I made it.

One of the ideas behind this exchange is that Bobby passed the test, but did not choose to be a monk. His relationship with Connie in this life was more important to him, and to take him away from that would be unacceptable.

While this is not exclusively Buddhist in ideology, and while it's neither highly detailed nor completely (in)accurate, it is notable for the fact that it provides a tangible argument in an appealing way, and that it's set in modern times. It's a great episode and I highly recommend it. That kind of television is something else entirely.

Monday, April 27, 2009

Linguistic similarities in Arabic and Sanskrit vowels

I don't know if this belongs here, or in A Modern Hindu's Perspective, but since it's not directly religious in any way, and provides very interesting notes linguistically, and since linguistics has a major impact on modern technology (by way of voice recognition, sound analysis, and linguistic interpretation by machines), I figured it wouldn't hurt to throw it this way.

NOTE: Here, I use my slightly altered version of ITRANS for sanskrit (saMskRta) transliteration. I use the Buckwalter transliteration for arabic (Eraby). For the purposes of this post, my alterations to ITRANS are negligible. Also, I will describe the appearance of the ta$kyl so that people who are familiar with arabic phonetics and logography can follow along without worrying about the transcription. And while I've taken linguistics classes and studied both languages with native/polished speakers (one more than the other), I am in no way a linguist. Thus, I do my best to be as accurate as possible and give helpful links, but feel free to study both languages and compare for yourself.

Also, if you have no idea what I just said, don't panic! Just read on and you can ignore that jazz.




Something interesting I noticed today was how arabic's vowels and sanskrit's vowels are similar. In arabic, you essentially have three tiers of vowels, dictated by length (traditionally, in terms of beats).

You have the long vowels (in order of strength): yA (/y/), waw (/w/), and Alif (/A/). These are pronounced as "ee" in 'beet,' "oo" as in 'boom,' and "a" as in 'hat,' respectively, in standard American English. They are held for two beats.

You have the short vowels (same order as long vowels), the kasra, Dam~a, and fatHa. There is one phonetic difference here: the kasra is often pronounced like "i" as in 'bit,' but they do still correspond to the longer vowels. These short vowels are held for one beat.

Then, you have the hamzap (hamza). It represents the glottal stop, which most English speakers will recognize as the hyphen in 'uh-oh.' It's that short abruptness you cause when you close your throat. The hamzap in arabic has a vowel quality associated with it. In writing, when it appears at the beginning of a word, it is represented as an Alif with the hamzap under it (for the yA equivalent), an Alif with the hamzap and Dam~a over it (for the waw equivalent), or an Alif with the hamzap over it alone (for the Alif equivalent).

For English speakers, think of that teenage apathetic "I'm not interested" sounding "eh," except with the aforementioned vowel sounds and shorter.

These hamzap representations are held for half a beat.

Now let's get to the sanskrit representation.

You have the long vowels (in a comparative, not traditional, order): /ii/, /uu/, and /aa/. /ii/ is pronounced just like arabic yA and /uu/ is pronounced just like arabic waw. /aa/ is NOT pronounced like Alif, however; Alif is more frontal (remember, "a" as in 'hat'), but /aa/ is a little farther back. Think "a" as in 'far,' or the first "o" in 'October.' The long vowels are also held for two beats.

You have the short vowels (same comparative order): /i/, /u/, and /a/. In sanskrit, /i/ and /u/ have the same quality as /ii/ and /uu/, but are just one beat in length. This changes for modern Indian languages, where the short versions end up sounding like "i" as in 'bit' and "u" as in 'put.' Also, there are two schools of thought as to the pronunciation of /a/. In one, it's pronounced just like /aa/ but held for one beat. The second, and more predominant, school has /a/ pronounced like "u" as in 'bun' or "o" as in 'done.' Here, too, it is held for one beat.

Then, you have two semivowels, /ya/ and /va/. /ya/ is a palatal semivowel and is associated with /i/ and /ii/ in sanskrit's system of sandhi (which documents phonetic assimilation). In vedic sanskrit, /va/ was pronounced like an English "w," but came to be pronounced like the English "v." However, it still remains the labial semivowel, related to /u/ and /uu/.

Let's say you have a sanskrit word, /karmaNi/ "actions." Then, you have another word after it, /eva/ "only." You put them together in a phrase and you get /karmaNi eva/. However, in sanskrit, you must apply sandhi, and the /i/ changes to the semivowel /ya/. You end up with one word, /karmaNyeva/, which still means "only actions."

It's a little bit easier to say, and if you were to say the two words in casual speech (read: quickly and not in a metrically significant way), you'd end up with this anyway. To steal the Wikipedia article's example, think of the phrase "don't be" in English. Say it casually and quickly, and it comes out as "dome be." This happens in a great many languages, and instead of forcing uncomfortable articulation when you speak "properly" or formally, sanskrit accepts and documents the change, and then "forces" you to apply it in proper or formal speech.

/karmaNyeva/'s semivowel conversion illustrates that /ya/ - and the "y" in English, for that matter - is just a broadening and truncation of the vowel /i/, to which you can then attach another vowel. Sanskrit doesn't like consecutive vowels, unlike greek and latin (mostly greek), and this is how it deals with them. But, getting to the point, this makes the semivowels /ya/ and /va/ like very short, half-beat-length vowels in their own way.
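
As a rough illustration of the rule at work, here's a minimal sketch (my own toy function, not a real sandhi engine - actual sandhi covers many more rules and contexts) that converts a final short /i/ or /u/ into the corresponding semivowel before a following vowel, using the transliteration from this post.

```python
# Toy sandhi: final /i/ -> /y/ and final /u/ -> /v/ before a vowel.
# This handles only the single semivowel rule discussed above.
VOWELS = ("a", "aa", "i", "ii", "u", "uu", "e", "ai", "o", "au")

def join_with_sandhi(first: str, second: str) -> str:
    """Join two words, applying the i->y / u->v semivowel conversion."""
    if second.startswith(VOWELS):
        if first.endswith("i") and not first.endswith("ii"):
            return first[:-1] + "y" + second
        if first.endswith("u") and not first.endswith("uu"):
            return first[:-1] + "v" + second
    return first + " " + second

print(join_with_sandhi("karmaNi", "eva"))  # karmaNyeva ("only actions")
```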



It's fun to see similar structures.


Now, in arabic, you have two diphthongs, /ay/ and /aw/. This is a fatHa (one beat length of Alif) with yA and waw, respectively.

Sanskrit has four diphthongs (merged from a few more): /e/ and /ai/, and /o/ and /au/. Of these, /ai/ and /au/ come to mind. Originally, scholars think they may have been pronounced as /aa/+/ii/ and /aa/+/uu/, but in modern pronunciation (and perhaps as far back as classical sanskrit), they are pronounced as /a/+/i/ and /a/+/u/. Each of these two diphthongs is two beats in length, and they are classified as "long" vowels in sanskrit. Nice parallel structure, eh?
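
To sum up the parallels, here's a rough side-by-side of the tiers described above, using the transliterations from this post - just my own summary sketch for convenience, not a formal phonological analysis.

```python
# A rough summary of the vowel tiers compared in this post. Beat counts
# follow the descriptions above; the "half-beat" row pairs the arabic
# hamzap (with its vowel quality) against sanskrit's semivowels.
vowel_tiers = [
    # (tier,       arabic,                       sanskrit,               beats)
    ("long",       ["yA", "waw", "Alif"],        ["ii", "uu", "aa"],     "2"),
    ("short",      ["kasra", "Dam~a", "fatHa"],  ["i", "u", "a"],        "1"),
    ("half-beat",  ["hamzap + vowel quality"],   ["ya", "va"],           "0.5"),
    ("diphthong",  ["ay", "aw"],                 ["e", "ai", "o", "au"], "2 (sanskrit)"),
]

for tier, arabic, sanskrit, beats in vowel_tiers:
    print(f"{tier:10} arabic {arabic} ~ sanskrit {sanskrit} [{beats} beat(s)]")
```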



I don't know how much you guys are familiar with linguistics and such, but this is pretty interesting to me because arabic is an Afro-Asiatic language, while sanskrit is an Indo-European language. The two languages are pretty distant in terms of linguistic genealogy, and the features mentioned here are old in both respective languages, implying that a much later borrowing of structure and phonemes did not occur; it's more likely that each language retained these features independently. Also, while I used sanskrit here, please note that these vowel structures are in use in modern Indian languages in general, though the use of sandhi has declined in favor of consecutive vowels.

You may also be interested in how sanskrit and avestan are related.