Thursday, November 01, 2007

Phobophobia - let's just be afraid of everything.


In the process of doing some light research for a piece about fear of information and technology, I stumbled across a rather long list of phobias. I've decided to keep a record of those that I find intriguing, both linguistically and phenomenologically.

Apparently we have the potential to be afraid of just about anything. Moreover, your fears are not at all unique.
[the emphasis is for 'LinguisticFX' and general artiness]

Acousticophobia - the fear of noise (also phonophobia)
Agateophobia - the fear of insanity
Allodoxaphobia - the fear of opinions
Apeirophobia - the fear of infinity
Asymmetriphobia - the fear of asymmetrical things
Atelophobia - the fear of imperfection
Atomosophobia - the fear of atomic explosions
Atychiphobia - the fear of failure
Autophobia or Monophobia - the fear of being alone

Barophobia - the fear of gravity
Bibliophobia - the fear of books

Cacophobia - the fear of ugliness
Cancerophobia or Carcinophobia - the fear of cancer
Cardiophobia - the fear of the heart/heart disease
Catagelophobia or Katagelophobia - the fear of being ridiculed
Chrometophobia or Chrematophobia - the fear of money
Chromophobia or Chromatophobia - the fear of colors
Coitophobia - the fear of coitus
Commitmentphobia - the fear of commitment to relationships

Decidophobia - the fear of decisions or making decisions
Deipnophobia - the fear of dining and dinner conversations
Dementophobia or Maniaphobia - the fear of insanity
Demophobia, Enochlophobia or Ochlophobia - the fear of mobs or crowds
Dipsophobia - the fear of drinking
Doxophobia - the fear of expressing opinions or receiving praise
Dysmorphophobia - the fear of deformity or unattractive body image
Dystychiphobia - the fear of accidents

Eleutherophobia - the fear of freedom
Ephebiphobia - the fear of teenagers
Epistemophobia - the fear of knowledge
Eremophobia or Isolophobia - the fear of being oneself, or of being alone
Ergophobia or Ponophobia - the fear of work

Gamophobia - the fear of marriage
Geliophobia - the fear of laughter
Gerascophobia or Gerontophobia - the fear of the old, or of growing old
Glossophobia - the fear of speaking in public

Hedonophobia - the fear of pleasure
Hellenologophobia - the fear of Greek terms or complex scientific terminology

Ideophobia - the fear of ideas
Kainolophobia or Kainophobia - the fear of anything new, novelty (also neophobia)

Laliophobia or Lalophobia - the fear of speaking
Logizomechanophobia - the fear of computers
Mastigophobia - the fear of punishment
Melophobia - the fear of music
Metrophobia - the fear of poetry
Anthropophobia - the fear of mankind in general (misanthropy, by contrast, is hatred of mankind rather than fear of it)

Obesophobia or Pocrescophobia - the fear of being overweight, or gaining weight
Papyrophobia - the fear of paper
Phronemophobia - the fear of thinking

Sociophobia - the fear of being judged, people in general or society
Sophophobia - the fear of learning
Spacephobia - the fear of outer space
Symbolophobia - the fear of symbolism
Symmetrophobia - the fear of symmetry

Tropophobia - the fear of moving or making changes

Venustraphobia - the fear of beautiful women
Verbophobia - the fear of words

Xenoglossophobia - the fear of foreign languages

[Photo by noamgalai. CC Licensed]

Tuesday, October 16, 2007

Organizational Culture & Meme Therapy


This entry is by no means a synthesis, but an idea that I may pursue. This morning I received an email from my ex-principal in Shanghai, Bernadette Carmody, and I couldn't help but think about her persistent work there with constructive group culture, and how much of a difference it made to focus upon and articulate specific features of the culture.

We know that culture manifests itself in patterned and symbolic behaviors. This got me thinking about the concept of memetics, or the "meme" that Richard Dawkins developed in 1976. Now I'm not offering any type of novel insight into the wider cultural application of a study of memetics in society, but have studies at single sites (such as a campus) been published? It seems that there could even be an opportunity to work on direct memetic engineering, as opposed to just working with generalized concepts of positive interaction in a culture.

Prior to Dawkins' memetic theory, de Bono published "The Mechanism of Mind" in 1969. In my opinion this is still his best work, as he took a functional approach to the development of memory and behavioral patterns. It's this recognition and identification of pattern formation and habituation that interests me, as his concept of "d-lines" on a "memory surface" seems to align with memetic theory, and it offers a 'mechanical' view of why 'old habits die hard.'

Ironically, I was about to suggest that the concept of mirror neurons, as raised by Daniel Goleman in "Social Intelligence," might also be an interesting tack for an analysis of organizational culture, but directly after his book's discussion of neural mirrors and "social synchrony," he launches into a discussion of memes! Hence, this is not new thinking, but do we have models and examples of how to apply these theories of social genetics to a workplace? It may not be difficult to find an example of a 'toxic culture' with allusion to general behavioral patterns in that group; however, do we have concrete examples of memetic viruses 'infecting' a group? There may also be examples of "meme therapy," whereby a positive meme is 'released' (either intentionally or unintentionally) and then propagated within the culture.

Beyond my initial question of whether this kind of study has been completed and published previously [ie. I need to do further reading], there is actually ongoing utility that may be derived from an organizational self-study. If an organization actively studies its own behavioral patterns and is able to identify specific memes/discrete units, then it may have a targeted strategy for enhancing its culture through codification and "meme therapy." Furthermore, the same kind of group therapy could be applied as a study within the classroom and directly relate to the social transmission of memes in students' actual lives.

In terms of application within the classroom, there are a number of concepts and approaches that could be used:
(1) General use in order to enhance group/student/school culture
(2) Use in Social Science/Studies to highlight sociolinguistics
(3) Interdisciplinary use in Science to parallel study of genetic transmission
(4) Interdisciplinary use in Math to study statistical analysis
(5) Interdisciplinary use in Language Arts to study language structure and the synthesis of ideas.
(6) Interdisciplinary use in Health to study the physical and psychological impact of various forms of memes. In some ways, concepts that are already promoted, such as "No put-downs," could become launchpads for wider application to other memes with negative semantic loading.
(7) Literacy: at Shanghai American School 6th graders worked on a unit about "truths," which included analysis of some forwarded emails/spam. The propagation of these kinds of items is a prime example of a memetic virus.

A study of memetics and its impact upon organizational culture could either be longitudinal or it could be packaged as a short unit to simply highlight features and ideas that can be modified within the culture. Regardless, a scientific identification of specific transmissions/memes could prove to be a powerful social and linguistic lesson for any kind of organization, as long as the procedure does not target vectors (ie. those who transmit/propagate memes), but only the memes themselves.
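To make the 'infection' metaphor concrete, the spread of a meme through a group can be sketched as a simple contagion simulation. This is purely a toy model of my own - the group size, transmission probability, and number of rounds are invented parameters, not drawn from any published study:

```python
import random

def simulate_meme(group_size=30, seeds=2, p_transmit=0.25, rounds=10, rng_seed=42):
    """Toy contagion model: each round, every current carrier of the meme
    interacts with one randomly chosen group member and may pass it on.
    Returns the number of carriers after each round."""
    rng = random.Random(rng_seed)
    carriers = set(range(seeds))        # members who already 'carry' the meme
    history = [len(carriers)]
    for _ in range(rounds):
        for _carrier in list(carriers):
            contact = rng.randrange(group_size)   # a random interaction partner
            if rng.random() < p_transmit:
                carriers.add(contact)             # the meme 'infects' the contact
        history.append(len(carriers))
    return history

# Carrier counts per round - the familiar S-curve of a spreading meme.
print(simulate_meme())
```

Even a sketch this crude suggests what a classroom unit could measure: how quickly a 'released' meme saturates a group, and how changing the transmission probability (eg. by moderating negative interactions) changes the curve.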

If you happen to know of any studies or recommended reading, please get in touch with me.

Friday, October 12, 2007

Can't Parse This? Coding Analogies for Literacy

This is one of those instances where a model in computational linguistics has actually led me back to thinking about natural language processing, as opposed to thinking in the other direction. As I was attempting to extract the .flv (Flash) file of an online video using a browser plug-in, I encountered an old friend: "Can't parse this file."

Now what does this mean? Parsing a file can operate on multiple levels: lexical and syntactic, just as parsing can operate for human beings as they process natural language. Parsing is essentially the recognition of patterning and structure within encoded meaning, and this includes text/speech.

If I listen to a foreign language that I have a cursory knowledge of, I am sometimes able to parse the text on a grammatical level, but not on a lexical level. In other words, I may be able to recognize the syntactic structure without understanding the meaning of all of the words. Likewise, if I know all of the lexical items, but I'm not aware of how the syntax influences the encoding of the idea, then I may not understand what is being communicated.
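Python's own parser can stand in for this distinction (a sketch - the nonsense names are deliberate). A line built from unknown 'words' can still parse structurally, while a line of perfectly familiar words can fail the grammar:

```python
import ast

# Structure without meaning: none of these 'words' exist, yet the line is
# grammatical, so the parser happily builds a verb-like Call structure.
tree = ast.parse("flibber(wug, blick)")
print(type(tree.body[0].value).__name__)    # -> Call

# Meaning without structure: every token is a familiar Python word, but the
# arrangement violates the grammar, so parsing fails outright.
try:
    ast.parse("print if else return")
except SyntaxError:
    print("known words, but the structure can't be parsed")
```

This is exactly the experience of hearing a half-known language: syntactic parsing and lexical lookup can succeed or fail independently of one another.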

In terms of supposedly optimizing the language learning process, we have arguments from corpus linguistics that teaching the highest-frequency words first will assist in the parsing process. Many of the most frequent words are not even lexical items, but fall into the category of grammatical markers, such as prepositions. There are also extremely frequent morphemes that can help us parse a text, eg. "-able" indicates that the word is going to be an adjective.
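The frequency argument is easy to demonstrate: even a tiny sample skews heavily toward grammatical markers. A minimal sketch (the sample sentence below is invented for illustration):

```python
from collections import Counter
import re

# A tiny stand-in corpus (this sample text is invented for illustration).
text = """The teacher put the book on the desk and the students
looked at the pictures in the book before the lesson began."""

words = re.findall(r"[a-z]+", text.lower())
# The top of the frequency list is dominated by grammatical markers,
# not content words - "the" alone accounts for 7 of the 22 tokens here.
for word, count in Counter(words).most_common(5):
    print(word, count)
```

Run against a real corpus like Cobuild, the same few function words dominate by an even wider margin, which is the basis of the 'teach frequent words first' position.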

Now I intend to return to Halliday's functional grammar to revisit his concepts on alternative methods of parsing natural language; however, I have to wonder whether adapting some of the systematic methods of codebreaking, and teaching code (whether computational, abstract, or another language) as a parallel in the language classroom, could facilitate the language learning process.

My concept for application is parallel coding 'metaphors,' or analogies, in the second language classroom. Everyone familiar with Gardner's multiple intelligences knows that there are many ways to appeal to an individual's strengths and styles. Language learning has certainly come a long way, but I think we sometimes just get to a point of cosmetic embellishments. If reading and comprehension are essentially processes of parsing (including juggling items in memory), then wouldn't it be helpful to exercise these skills by directly connecting them to procedurally similar tasks?

In particular, I'm thinking about ways to make the process of learning "dry code" (vocabulary and grammar) both more stimulating and more memorable for students. However, the aim would not just be to improve language outcomes, but to increase intrinsic motivation, and the general flexibility of students' thinking.

Essentially, the more approaches, angles, connections and decoding skills that students are equipped with (including, hopefully, some that they can actually relate to), the more likely it is that they'll be able to translate those parsing skills across disciplines and into the life of natural language processing.

Before I get to some suggestions and examples, I'd like to state that my thinking on using structural analogies was motivated by my quest for an innovative way to approach grammar and syntax (that's my extrinsic motivation). Intrinsically, and more essentially, I'm motivated as an educator to support and promote interdisciplinary learning and the decompartmentalization of 'knowledge domains,' as keeping them separate effectively cripples students in terms of activating prior knowledge and developing flexible thinking. The fact that there are bonuses in terms of linguistic reinforcement and utility is really just icing on the cake.

Some examples of structural analogies to develop concepts of syntax and grammar (articulated/full examples forthcoming):

(1) Using a computer language: because computer/machine languages are usually based on strict syntax, they are a way of demonstrating that flawed code will not produce the desired function. Schema can be paralleled to the structure of a particular genre, and students may compare structures and elements to natural languages, eg. a command is like a verb; an object or a variable may be like a noun; a modifier may be like an adverb; opening and closing tags may function like the elements at the beginning and the end of a genre; non-optimized code may be compared to verbosity or redundancy.
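As a minimal sketch of this analogy, here is a toy 'classroom command language' with a strict verb(noun, adverb) grammar. The grammar, the verb list, and the parse_command helper are all invented here purely for illustration:

```python
import re

# A toy command language: a sentence must be verb(noun) or verb(noun, adverb).
# The grammar and verb list are invented for classroom illustration only.
GRAMMAR = re.compile(r"^(move|take|open)\((\w+)(?:,\s*(\w+))?\)$")

def parse_command(line):
    """Return (verb, noun, adverb) if the line fits the grammar, else raise."""
    match = GRAMMAR.match(line)
    if match is None:
        raise SyntaxError(f"can't parse this: {line!r}")
    return match.groups()

print(parse_command("open(door, slowly)"))   # well-formed -> ('open', 'door', 'slowly')

try:
    parse_command("door open slowly")        # the right words, but flawed syntax
except SyntaxError as err:
    print(err)
```

Students immediately see that the 'machine' gives no partial credit: the same lexical items in the wrong arrangement produce nothing, which is the strict-syntax point of the analogy.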

(2) Movement sequences in video games could be compared to grammatical structure.

(in process... to be continued...)

Wednesday, September 05, 2007

Blogmentation à la Neurological Localization

Admittedly, the subject of this entry is probably an incomprehensible mouthful at first glance, but let me get to the point quickly. When I was in school I used to carry around a bunch of different workbooks/writing pads for each of my school subjects, eg. English, Physics, Chemistry, Math II, etc., because each book was a separate space for a separate purpose. Some people maintain different emails for different purposes, eg. one professional and one personal.

However, even though we can grasp this concept of compartmentalization for other purposes, many people still maintain a one-size-fits-all solution for their digital lives: one blog that covers everything, one Facebook account that lumps together everyone they've known, or even one YouTube account that includes representations from both professional and personal domains. There's something about this approach that seems somewhat unwieldy. After all, I wouldn't wear the same clothing regardless of the event, nor would I use the same register of speech regardless of the audience, so why have we chosen to lump our communication into such an illogical model, one that doesn't really seem to be adapting to the number of social and professional domains that we live in?

I don't know if I'm the first to coin the term "blogmentation," but I'd like to define my use of it as "compartmentalization and fragmentation of blogging" by individual communicators. This particular 'space' is where I'm gradually coming to focus my thoughts on linguistics and communications that have more of a social/human edge, whereas I have a completely separate blog, JC's Tangent, where I'm gradually channelling my discussions and reflections on technology. I also keep a personal blog where my latest creative urges, or just my general movements, are documented in a reasonably erratic form.

The question for the evolution of this genre/form is how individuals will choose to use blogging in the future, and how blogging services and platforms will offer these options to them. In this sense, I'm not so concerned about people who know how to set up their own WordPress server, but about how the end-user services will be presented and provided. Blogger, for example, allows a single user to set up a number of blogs, but how does the average person use this, if at all?

In the back of my mind there are some ongoing concerns that I have about the companies that are leading us into this age of "mass information management," although because that's a UI (user interface) argument, I think I'll save it for JC's Tangent instead of LinguisticFX, as I'd really like to work within the boundaries and disciplines of the 'broader topics' that I've chosen, and to encourage others to "channel down" and experiment with their own experience of 'blogmentation.'

Tuesday, August 28, 2007

Got Milk?

It's been quite some time since I've updated this blog. Getting married and moving to a different country may have had something to do with this. Regardless, my last entry made the blog look like it was going to go down the road of computational linguistics, so I'd better step back and diversify. LinguisticFX was always intended as a 'playground' for ideas about language and linguistics that I found fun and fascinating, so let me take this back to what made me fall in love with language to begin with (that is, after I learned to demand things as an infant) - awe, wonder, and a little laughter thrown in for good measure.

Warning - I'm sometimes very easily amused, and apparently I need to delve into Greek roots more often.

Tonight I was enjoying a "shot of milk" - don't ask - it just makes drinking milk seem a little decadent, and I wondered what the origin of the word "milk" was. A quick search on www.etymonline.com revealed Old English/Saxon roots as:

"meoluc" and "milc" (Anglian), which were both related to the verb "melcan" (to milk). The noun is from P.Gmc. *meluk- (cf. O.N. mjolk, Du. melk, Ger. Milch, Goth. miluks); the verb is from P.Gmc. *melkanan (cf. O.N. mjolka, Du., Ger. melken); both from PIE base *melg- "wiping, stroking," in ref. to the hand motion in milking an animal (cf. Gk. amelgein, L. mulgere, O.C.S. mlesti, Lith. melzu "to milk," O.Ir. melg "milk," Skt. marjati "wipes off"). O.C.S. noun meleko (Rus. moloko, Czech mleko) is considered to be adopted from Germanic.

Of course, some of these roots took me back to the Korova Milk Bar in the Anthony Burgess novel, "A Clockwork Orange" (and the subsequent film by Kubrick), where various incarnations of "Moloko" were served. I've always enjoyed works like this that were so playful and inventive with language - George Orwell's dire "1984" actually made me grin.

However, as I followed the roots of "milk" further down the page I stumbled upon connections to lactation (let's not spoil this by looking for Freudian connections - this is about language, not Oedipus or psychotherapy):

1668, "process of suckling an infant," from Fr. lactation, from L. lactationem (nom. lactatio) "a suckling," from L. lactatus, pp. of lactare "suckle," from lac (gen. lactis) "milk," from PIE base *glact- (cf. Gk. gala, gen. galaktos "milk").

Hang on... galaktos "milk"? As in the milky way galaxy?

Sure enough, the etymology of "galaxy" is explained as:
c.1384, from L.L. galaxias "Milky Way," from Gk. galaxis (adj.), from gala (gen. galaktos) "milk" (see lactation). The technical astronomical sense emerged 1848. Fig. sense of "brilliant assembly of persons" is from 1590. Milky Way is a translation of L. via lactea.
"See yonder, lo, the Galaxyë Which men clepeth the Milky Wey, For hit is whyt." [Chaucer, "House of Fame"]

Therefore, every galaxy is a "milky way," and apparently our own "Milky Way Galaxy" is particularly milky. I'd never really thought of the heavens as a divine mother before - maybe a Freudian analysis is actually the right way to go with this topic after all?

Monday, March 26, 2007

Ubuntu Linux under Parallels: Native Screen Resolution

A number of articles have been posted in various places around the web about the problem of viewing native screen resolutions in Ubuntu when it's run in Parallels as a virtual machine. I'd like to offer my synthesis of the problem, as I noticed that some users had given up. Basically, I had to experiment and assemble a few 'pieces' in order to activate the native screen resolution of my MacBook, in order to 'conquer' the default maximum setting of 1024x768 pixels.

The whole process is run within Ubuntu - you shouldn't need to try to access any of the settings within your virtual machine:

(1) Applications > System Tools > Terminal
(2) sudo gedit /etc/X11/xorg.conf &
(3) Enter your administrator password
(4) The xorg.conf file will open in the text editor
(5) Scroll down in the file until you locate Section "Monitor"
(6) Edit the horizontal and vertical refresh rates which, by default, are inadequate for higher resolution monitors. I used the following settings:

Section "Monitor"
Identifier "Generic Monitor"
Option "DPMS"
HorizSync 28-64
VertRefresh 43-87
EndSection

(7) Now, scroll down to the next section, Section "Screen", and enter the native resolution of your monitor before each instance of lower screen resolutions. I used the following settings for my MacBook, but for a 15" MacBook Pro you'd use, for example, 1440x900.

Section "Screen"
Identifier "Default Screen"
Device "Generic Video Card"
Monitor "Generic Monitor"
DefaultDepth 24
SubSection "Display"
Depth 1
Modes "1280x800" "1024x768" "800x600" "640x480"
EndSubSection
SubSection "Display"
Depth 4
Modes "1280x800" "1024x768" "800x600" "640x480"
EndSubSection
SubSection "Display"
Depth 8
Modes "1280x800" "1024x768" "800x600" "640x480"
EndSubSection
SubSection "Display"
Depth 15
Modes "1280x800" "1024x768" "800x600" "640x480"
EndSubSection
SubSection "Display"
Depth 16
Modes "1280x800" "1024x768" "800x600" "640x480"
EndSubSection
SubSection "Display"
Depth 24
Modes "1280x800" "1024x768" "800x600" "640x480"
EndSubSection
EndSection

(8) Now save your config file. Most users would also recommend creating a backup of the original file before you start editing (eg. sudo cp /etc/X11/xorg.conf /etc/X11/xorg.conf.backup), in the event that you can't boot back into the OS and wind up at the command line.

(9) System > Logout > Restart

(10) In my case, as soon as I rebooted the OS (inside Parallels) it instantly started displaying in my native screen resolution. Otherwise, use System > Preferences > Screen Resolution to manually select your screen res.

Good luck. I offer no guarantees that the method is foolproof, but this workflow is what I would have liked to have found on help forums in order to avoid the convoluted process I had to go through ;-)

Jonathan.

Sunday, March 25, 2007

Welcome to the Angsternet?

Call me naive, but "I sense a disturbance in the Force" on the Internet. However, whether you feel it or not is entirely dependent upon what you've seen, heard, and experienced on/about the web lately.

I opened up an online paper this morning to discover that a father of two had hanged himself on a videochat inside an 'insults' chatroom. Personally, I don't know why anyone would find throwing "humorous" insults at other people amusing - I mustn't possess the gene for that kind of 'entertainment.'

However, my thoughts on this topic began earlier this weekend when I noticed that a "guest editor" on YouTube was introducing all sorts of strange, negative, angst-driven material. I can certainly appreciate that documenting people's depression and 'hard times' can be an 'art form,' but when it's unmoderated, condoned, encouraged, and let loose in the wild, the psycholinguistic effect can be devastating.

When YouTube began, pretty much anything you posted was appreciated by its then-small audience. However, it's been increasingly developing into a scolding, judgmental playground where people find it 'amusing' to deride and criticize others... and, subsequently, it's developed a "big city mentality" where you can run and hide if you make mistakes, and just "switch crowds" if you seriously offend someone.

This seriously disturbs me. I have long been an advocate for utilizing the potential of such an amazing tool, but it seems like the democracy and open-minded spirit that originally drove the YouTube community is losing its way. This weekend one of the amusing users that I subscribed to a while back was "featured" on the front page of the site, but then she received torrents of abuse and criticism. I then followed her subsequent decline - she started making bleary-eyed "rants," as she was obviously harrowed by the experience. I've tried to encourage her, but of course, she's publicly putting on a brave face, and trying to act like none of this affected her. I beg to differ. This is dangerous territory - people are starting to script messages for other users like, "Why don't you just die, b***ch!" No matter how resilient you are, this can't be a positive experience for any human.

I composed a message to YouTube staff this afternoon:

"I am an educator, but I'm also a general user of YouTube. Over the past
week the featured videos (and where they're linked to) have plummeted into
depressing and unnecessarily weird avenues that are just going to spark
more dissent against YouTube by people who don't understand what it can
do. I teach in Asia, and I use YouTube as an educational tool, but there
are many schools across the US where YouTube has been completely banned.
I'm worried that the direction that the feature section is taking, and the
lack of ACTUAL good conduct in many areas of YouTube are going to be a
tipping point that creates even more oppositional sentiment for the site.

So is it just about controversy and revenue, or are the people in charge
actually concerned about the fact that the "mood" on YouTube might
actually affect the "mood" in wider society... that's the kind of power
the site has, so I'm pleading with the editors to not abuse it.

Don't be evil!"

...I love the potential of YouTube, but I definitely don't want to become part of what I've started to refer to as 'The Angsternet.' If a fruit store sells you enough rotten apples, then no matter how good it was in the past, you're going to decide to shop elsewhere.

Monday, March 19, 2007

Online adult behavior in the Brave New World

I'll make this brief, although I don't want to lose this thought, as I feel like a new tide is coming in, and somebody should at least speculate: lately I've noticed that some adult/professional behavior is starting to reflect the behavior of kids online... adults who "duck and weave" online in order to avoid conversation or confrontation. Interaction is changing, and not necessarily for the better. We teach 'netiquette' to children (the etiquette of interacting on the web), but then those who we know well in professional circles may become so absorbed in this "kids' world" that they start to emulate their behaviors, and believe that they're acceptable.

I honestly don't believe that I'm offensive online, but as with everything in life, maybe I'm on a different 'track' to you, or maybe I'm not willing to buy into your personal crusade, because my chosen focus lies elsewhere. In the pursuit of 'optimization' or whatever other nominalization you might choose to use as an excuse for poor behavior on the Internet, are you actually succumbing to poor behavior, and justifying it with the laws of the 'wild west?' This might seem lofty or 'police like,' but are we changing the rules of interaction to suit ourselves, and to further our own causes, whilst losing our humanity in the process?

If you have a process or a project that you wish to further I can fully respect that, but if you start ducking away and using an "I can't see you, and you can't see me" methodology, aren't we just playing the kinds of games that we can play with children who haven't developed beyond the "here and now" and concrete thinking?

Sunday, March 11, 2007

Internet Apotheosis: a strangely familiar theme reemerges

I usually reserve thoughts on social networking issues for my "Tangent" blog, which is located on LiveJournal's servers, but it seems to have been down for at least 48 hours, so I'm wondering if LJ has become a victim of the fickle hordes, who've flocked to MySpace, Xanga, Facebook & co. Ahem - people in glass Blogspots shouldn't throw stones! Regardless, I'm pursuing an issue that I started looking at here:
"Selling Kudos: the psychology of baiting with virtual crumbs"
http://virtualjonathan.livejournal.com/2405.html

I've run across "Getting Rich off Those Who Work for Free" (Justin Fox, Time, March 5, 2007) twice now: first online, and then flipping through the print version. The article focuses upon the "gift economy" of contributors to Wikipedia, open-source software (Linux, Firefox), Digg, Flickr, YouTube... and the list goes on. My argument was that web sites are baiting users with "virtual crumbs," eg. token rewards like hit stats and award badges on their sites... things that cost these sites nothing. In the meantime, while users provide free content for these sites the cash is pouring AWAY from the content creators: advertising revenue increases for the websites, investors pour more money into infrastructure, staffing, and R&D... and eventually the goal of the majority of these companies is to go public, or to sell the whole bundle off at massive profits: systems, staff, user base AND a cache of user-created content.

This is all called the "Carr-Benkler" wager in tech/journalist circles at present. Benkler is a Yale law professor, and Carr is a business writer. Their debate is whether this "gift economy" model can continue to operate, and at which point it will become monetized. However, this is not my focus for writing this.

We find ourselves in familiar territory in terms of what humans have been battling for eons: the struggle for power, fame, and wealth. Increasing numbers are posting videos to YouTube in search of fame, and possible propulsion into commercial wealth. One prime example is the comedy duo Barats and Bereta, two students at a Jesuit college in the US who discovered that YouTube was a powerful launchpad into a commercial career as comedy developers. Now they have a network TV deal, and the sheep are lining the hallways of YouTube in droves in search of the same kind of success.

However, is it too late for most to achieve the same kind of success? Has YouTube become too much of an 'American Idol' for the lowest common denominator to ever break through, unless they're essentially talented, original, and work well with production teams in order to attract the attention of the masses? As 'early adopters' they utilized the technology intelligently and profited.

Millions of teens dream of becoming the next Michael Jordan, Britney Spears (OK, not so much at the moment), Justin Timberlake, LonelyGirl, or whatever "dream" they have of ultimate commercial fame. However, the majority of their efforts and contributions are currently being cannibalized by corporations, and they'll never see anything in return for their efforts. I have a cynical outlook on all of this, because anything that ultimately leads to massive fame and fortune is eventually going to be swallowed by the bellies of the corporations. There are extremely brief windows for innovators to break these rules, but once the window closes, the entertainment industry is back to 'business as usual' in developing a commercially viable following.

So, what can we do with this information? Direct the strategy back at the target market/most susceptible (ie. teenagers) in order to cannibalize their disposable cash... and time? The answer for most of us may be quite simple: work hard at specializing in a field with an enduring 'shelf-life' and continue to upskill in it, while upgrading our skills in peripheral technologies and strategies.

My suggestion is to continue to teach people about how to deconstruct the media, so that they can continue to focus upon core values and work ethics. There are literally millions of virtual hallways to get lost in every day, but where are the 'guiding lights' in the information age? I fear that we're losing ourselves in our own achievement - the creation of this fantastic communication network, but it's become its own "Babel." We know that we can connect the world, but many people still seem to be stumbling around in drunken wonder at this achievement.

It's time to refocus. We've created the machine, but now what are we going to do with it? Is 'it' controlling us, or are we controlling it?

Friday, March 09, 2007

TESMC Podcast Channel

Welcome to the TESMC podcast channel for teachers studying "Teaching ESL Students in Mainstream Classes" at SAS in Shanghai. Open up iTunes and click "Add to iTunes" to get automatic downloads of our audio and video programs.
Download iTunes here if you don't already have it!

If you're playing a video, then once the video has downloaded into the PODCAST section of iTunes, click on the program, and it will start playing in a small box on the bottom left hand side. Click INSIDE the box, and it will pop up in FULL SIZE.

Note: these video podcasts are not designed to transfer to a video iPod. They will only play inside iTunes.

Click here to get your own player.

Wednesday, March 07, 2007

Thoughts on "acceptable misuse" language policies on the web

Software and web sites are fundamentally driven by the underlying language/code that provides either the operating system or a web browser with instructions. If there are errors in spelling or syntax within the code, then the computer will reject the intent of the message/instruction, and refuse to execute the wish of the programmer. Writing works in much the same way, but with a much more unpredictable audience. Unlike linguistic parsing within a computer (which offers second chances to rewrite code if the message doesn't get through the first time), a human user may actually tolerate errors, but a human audience also has the capacity to walk away from the writer's work permanently.
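The contrast can be shown with Python's own parser: a single missing character is rejected outright, while a spelling slip that leaves the syntax intact sails through (for a human reader, the tolerance runs in roughly the opposite direction):

```python
# A machine parser rejects even a one-character slip outright, while a
# human reader would likely recover the intended meaning and move on.
flawed = 'print("Helo, world"'          # missing the closing parenthesis
try:
    compile(flawed, "<post>", "exec")
except SyntaxError:
    print("rejected: the computer refuses to execute the instruction")

fixed = 'print("Helo, world")'          # syntax now valid; the spelling slip
exec(compile(fixed, "<post>", "exec"))  # in "Helo" is left for humans to forgive
```

The programmer gets a second chance to rewrite; the writer whose audience walks away does not.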

Friday, February 23, 2007

Should 'of' be a preposition?

This is an excerpt from Thomas Bloor & Meriel Bloor's, "The Functional Analysis of English: A Hallidayan Approach"
(Hodder Headline Group, London, 1995)

p. 147
Prepositional phrases with 'of'

We mentioned that of is the most frequently occurring preposition in English. This information comes from Sinclair (1991), who calculates that of occurs more than twice as often as any other preposition. Sinclair's observations, which are based on a massive collection of English text, the Cobuild corpus, challenge some of the standard grammatical descriptions. As we have seen, prepositional phrases realize two main functions: Adjunct in a clause and Postmodifier/Qualifier in a Nominal group. Sinclair points out that it is generally assumed that the most typical (that is, frequent) function of prepositional phrases as a whole is the Adjunct, and for most prepositions (in, on, up, and so on) this is true. However, he notes that although OF does, like other prepositions, show up with this function (for example: 'convict these people of negligence') such occurrences are relatively rare, and the overwhelming majority of phrases of OF are Postmodifiers. He also notes that, unlike most prepositions, OF has no basic spatial sense (of direction or position); compare UP, ON, IN, OVER, UNDER. On grounds of distribution and typicality, Sinclair goes on to suggest that perhaps OF should not be classified as a preposition at all, but belongs to a class of its own. To date, this position has not gained widespread acceptance, but the argument is a powerful one.

Web 2.0 Engineering: Site Structure

Here I'm not referring to "engineering" in terms of building a code base, but the implementation and integration of 2.0 tools into basic web frameworks, like a blog or a (formerly) static site. I'm going to do this simply by providing a record of my current central structure [attached graphic].


[Click on model for full size]

The model provided here does not include hardware and software that would also 'plug in' to this structure. In fact, it's the interassembly of many different layers of negotiation of LANGUAGE that is required in order to make several websites function. At every layer, whether it's using video editing software; typing simple HTML into a blog editor; remembering where each web component is located and what its function is - there's a deeper cerebral model that could be "mapped," although chances are that a neurological model would appear to be significantly 'messier' than a simple PowerPoint! As an example, the additional software/hardware components are located here: How to build Virtual Jonathan 2.0

If I took ALL of these components and linked them (for example, my YouTube channel is reliant upon use of video editing software, sound editing, a video camera, a computer, a cable), then I'd start to create a functional model for how this system actually operates. As a two-dimensional model it's extremely limited, and now I'm moving toward my central point:
* Language cannot be effectively mapped in two-dimensional models such as linear grammatical structure.