
Understanding the terms related to Mobile Broadband (Mobilt Bredbånd)

Many of us sometimes encounter a situation where the topic of discussion is mobilt bredbånd and internet speed. There is a lot of arguing going on, but it is really impossible to participate. Why? Simply because many of us have never heard of terms like LTE, EDGE or GSM. Let it be very clear, though: you simply cannot get by without basic information about the general practices of the world of mobile broadband. This article aims at clearing up doubts about some of the scariest-looking terms one can find on the internet. Let us have a thorough look at them.

Mobile Broadband Generations:

Mobile broadband advertisements are full of terms like 2G, 3G and 4G. Ever wondered what this “G” stands for? It stands for Generation. A generation here means that, over a certain period, an update is made to the existing technology. This involves improving the transmission technology, introducing newer frequency bands, widening the available bandwidth and tweaking a few other things here and there. This whole update amounts to a complete phase change: 3G becomes 4G.

The different generations involve different protocols. Let us have a look at some of them and their grouping, before we define these terms in detail.

2nd Generation: GSM GPRS, GSM EDGE

3rd Generation: UMTS HSPA, UMTS TDD, GSM EDGE Evolution

4th Generation: LTE, LTE Advanced, HSPA+

Now let us have a look at what these stand for:

  • GSM stands for Global System for Mobile Communications.
  • GPRS stands for General Packet Radio Service.
  • EDGE stands for Enhanced Data Rates for GSM Evolution.
  • UMTS stands for Universal Mobile Telecommunications System.
  • HSPA stands for High Speed Packet Access.
  • TDD stands for Time Division Duplexing.
  • LTE stands for Long Term Evolution.

That is a lot of terms, so which is the better one? What should be preferred over what? A one-line summary of the speed comparisons would help. In the end, your requirement will be based entirely on how you plan to use the internet; for instance, a few GB per month is enough for an average user.
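Since a one-line-per-generation summary of the speed comparisons is exactly what one tends to want here, a rough sketch follows. The peak rates below are approximate, illustrative assumptions (not guarantees from any provider, and far above typical real-world speeds):

```python
# Approximate peak downlink rates per generation -- illustrative
# assumptions only; real-world speeds are much lower and vary by network.
PEAK_KBITS = {
    "2G (GPRS)": 56,
    "2G (EDGE)": 236,
    "3G (UMTS)": 384,
    "3G (HSPA)": 14_400,
    "4G (LTE)": 100_000,
}

# Print a one-line summary per generation, slowest first.
for gen, kbits in sorted(PEAK_KBITS.items(), key=lambda kv: kv[1]):
    print(f"{gen:<12} ~{kbits / 1000:g} Mbit/s peak")
```

As the ordering makes obvious, the headline improvement from each generation to the next is chiefly raw speed.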


Which mobile broadband is most widely subscribed?

Figures keep toppling as the technology marches on. While 2G ruled the world in 2011, the situation is quite the opposite at the moment. Over 50 percent of mobile broadband users have access to 4th-generation protocols. Even more startling is the fact that yearly subscriptions for mobilt bredbånd connections are increasing at a rate of 50 percent. This is despite the lack of proper infrastructure, i.e. cell towers, and the incredibly low speeds still found in many areas. A major factor behind this exponential growth is the ever-increasing demand for smartphones, which eventually leads to more broadband subscriptions.

There are many questions that are still unanswered, of course. Broadband (Bredbånd) is a field too wide to be summed up in a single article. Hopefully, this article helped you acquire a basic understanding of the terminology!


Mobile Broadband: An Evaluation

Over the past few years, the internet has taken a place in our lives as a necessary evil. Mentioning our total reliance upon the digital world hardly sounds like an exaggeration these days. Domestic life, trade, education, sports: almost everywhere, there is a cry for speedy internet. In an attempt to improve reliability and speed, mobile broadband was introduced, almost as an antidote to turtle-slow fixed-line connections. Its success is highlighted by the stats alone: the ever-enlarging swarm of subscriptions is representative of the technology's success. A major factor has probably been the incessant improvements, by which I mean the successive generations. Mobile broadband is, in short, a reality today.

Now, it is prudent to judge anything, even if it sounds great. A critical analysis is bound to bring out the best and the worst in anything, and Mobilt Bredbånd should be no exception. Let us have a look at the pros and cons of this wonderful facility in detail.

Advantages:

  • The accessibility is just wonderful. No wires are needed if mobile broadband is your choice.

The ease of access is a world apart from fixed-line connections. If you are bound for a long journey but really have to complete an online task, do not fret: just turn your 2G or 3G connection on.

  • Easy to use, certainly when you compare it to setting up a dial-up connection. There is also only a one-time installation.
  • You do not need a landline phone if you are planning to install mobile broadband.
  • It is more secure than the other options, since the data is encrypted.
  • Voice over Internet Protocol (VoIP) also works well over mobile broadband.

Disadvantages:

There are always some bad spots, no matter how shiny the surface is. Let us have a look at some of these unwanted features of the mobile broadband:

  • There is no guarantee of a streamlined signal supply. Physical obstructions can cut into signal efficiency.
  • There is a usage limit on mobile broadband. Companies sell these “packages”, as they call them: you can use only a certain number of megabytes or gigabytes, covering both uploading and downloading. Even in the case of unlimited broadband, you will most likely be moved to a slower service if you exceed a certain level, though this continued access will not cost you anything extra.
  • Too much of a good thing can actually be harmful. People forget when to stop surfing if they have too much to spend on mobile broadband packages.
  • These bytes are quite hard to come by. Prices are shooting up all the time, because the companies have realized that without their services, people would be entirely dependent upon the old turtles, the dial-up connections.
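The cap-and-throttle arrangement described above can be sketched as a tiny function. The cap, full-speed rate and throttled rate below are hypothetical numbers chosen purely for illustration, not any particular provider's terms:

```python
# Hypothetical "unlimited" plan: full speed up to a cap, then throttled
# at no extra charge once the cap is exceeded (as described above).
CAP_MB = 5_000           # assumed monthly allowance in MB (uploads + downloads)
FULL_MBITS = 50.0        # assumed full-speed rate in Mbit/s
THROTTLED_MBITS = 1.0    # assumed rate after the cap is exceeded

def current_speed(used_mb: float) -> float:
    """Speed (Mbit/s) a subscriber gets after transferring used_mb this month."""
    return FULL_MBITS if used_mb <= CAP_MB else THROTTLED_MBITS
```

For example, `current_speed(6_000)` would return the throttled 1 Mbit/s rate, while usage under the assumed cap keeps the full rate.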

Hopefully, you can now choose wisely between what is frugal and what is expensive for you. Speed thrills, but it kills as well, so do not go only for the fastest connections!

Mobile Broadband Generation

Mobile broadband (mobilt bredbånd) is a term used to describe wireless internet access through a mobile phone, USB dongle, wireless modem, tablet or portable modem. The technology was developed in the late twentieth century, after magnificent progress in electromagnetic technology. Mobile broadband uses radio waves to transmit signals. There have been distinct and significant developments in mobile broadband, denoted as generations. The variation from one generation to another lies basically in the technology used and the improvement in speed. Currently there is a fifth generation of mobile broadband; however, it is not yet officially approved. In case your fixed-line connection goes down, mobile broadband can likewise be helpful as a backup solution.

1G

This refers to the first generation of mobile communication. The technology was introduced in 1979. Unlike its successors, it transmitted data using analog signals. 1G was commercialized within the shortest time possible. With time, several limitations were noted, and this prompted further advances in technology to overcome these challenges.

2G

The technology was officially launched in 1991 in Finland. 2G uses digital signals to transfer data, thus opening the door to more technological advancement. The introduction of 2G enabled new services for the first time, such as SMS (Short Message Service). More advanced data transfers, such as pictures via MMS, were also realized. Digital coding helped deliver clearer voice, making communication more reliable. Further development of 2G technology enabled internet access at speeds up to 56 kbit/s, enabling access to technologies such as the Wireless Application Protocol (WAP), email, and the World Wide Web.

3G

The technology provides a higher data transfer rate of at least 200 kbit/s. This higher speed enabled access to more advanced services such as fixed wireless internet access, video calls, and mobile TV. It uses packet-switching technology, where data is transferred as packets, rather than the circuit switching used in the second generation. The speed of downloading and uploading data increased drastically. Many people wanted to access the internet, hence the need to make it more accessible through mobile broadband.

4th generation

The fourth generation provides more advanced services to mobile broadband users. Many people, especially business people, benefit. Some of the services accessible in the fourth generation include 3D television, cloud computing, high-definition mobile TV, IP telephony and video conferencing. Video conferencing is widely used today, especially by people who operate large-scale businesses. 4G data transfer can reach a peak of 100 Mbit/s, making smooth, reliable video streaming possible.


5G

5G is the next major phase of broadband technology under development. The major feature of the fifth generation is an improvement in speed, with data rates of up to roughly 1 Gbit/s delivered simultaneously to many users in the same workplace. The area coverage of the fifth generation is also expected to improve. The generation is also intended to support current trends such as the Internet of Things and broadcast-like services.
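To put the generational speed figures from the sections above into perspective (56 kbit/s for late 2G, a commonly cited 200 kbit/s minimum for 3G, a 100 Mbit/s 4G peak, and roughly 1 Gbit/s for 5G), a quick sketch can compute how long a 100 MB download would take at each idealised rate. These are peak figures with all overhead ignored, so they are best-case illustrations only:

```python
FILE_MBITS = 100 * 8  # a 100 MB file expressed in megabits

# Idealised peak/minimum rates in Mbit/s, per the figures in the text.
RATES_MBITS = {"2G": 0.056, "3G": 0.2, "4G": 100.0, "5G": 1000.0}

for gen, rate in RATES_MBITS.items():
    print(f"{gen}: {FILE_MBITS / rate:,.0f} s to download 100 MB")
```

Under these assumptions the same file takes around four hours on late 2G but well under a second on 5G, which is the whole story of the generations in one number.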

That Joy In Existence Without Which The Universe Would Fall Apart and Collapse

A few months ago I suddenly got the urge to look up one of my favorite authors, Madeleine L’Engle, online. I knew she was elderly, but I figured that perhaps she might have some contact information posted on the Web that I could use to write to her. One of the pages I found did include an address, only it wasn’t hers specifically — she’d apparently been living in a nursing home for several years following a stroke in 2002.

I was glad to learn that she was still alive (albeit somewhat worried about her health), but I didn’t end up writing to her. My hesitation was due to a combination of procrastination, cynically figuring that my letter might not get to her at all if other people were managing her administrative affairs, and not really being sure what to write in the first place.

I wrote down the address anyway and stashed it in a drawer, imagining that perhaps at some point in the future I’d pull it out again and give writing a letter a try despite my doubts about L’Engle actually receiving it.

Sadly, though, Madeleine L’Engle died on Thursday, September 6, 2007. Now I have no way to thank her personally for what her books have given me over the years. So instead I am writing this, hoping that it will express some sense of how A Wrinkle In Time and its sequels continue to inspire me in thinking, writing, and dreaming.

While I realize that this writing doesn’t exactly mesh with my usual subject matter, I figure it’s plenty appropriate considering that L’Engle’s books are part of the reason I write publicly to begin with.

The Books

Madeleine L’Engle was a prolific author — she began writing in early childhood and published over 60 books over the course of her life. While she is most well known for her fiction, she also wrote poetry and a number of spiritually themed books. (L’Engle identified as Christian; however, she was most assuredly not a fundamentalist, and noted that fundamentalists tended to dislike and fear her works because they saw spirituality as a “closed system”, whereas she saw it as an “open system”.) Most of her fiction ended up being grouped by the industry into the children’s market; however, she resisted classification as a “children’s author” and refused, much to her credit, to “write down” to her readership. She did not believe in underestimating what children (or adults, for that matter) would be able to grasp, and wrote accordingly.

I haven’t read all L’Engle’s books, and some of them likely veer off into directions that I wouldn’t find all that compelling, but I will probably seek out at least a few more and read them eventually. Here, I will focus on the three books of L’Engle’s that I’ve owned copies of since childhood, and read more times than I can count: A Wrinkle in Time, A Wind in the Door, and A Swiftly Tilting Planet.

Wrinkle

A Wrinkle In Time, arguably her most celebrated work of all, was rejected by numerous publishers prior to being published in 1962 — the manuscript, with its elements of science fiction, fantasy, and philosophy, was thought to be far too strange (and far too challenging) for the market. However, someone finally decided to take a chance on the novel, and it ended up winning the 1963 Newbery Medal. From that point onward it worked its way into school libraries and bookstores and consequently into the minds of several generations of curious young (and old) readers.

As a fifth-grader I read Wrinkle primarily as a straightforward sci-fi adventure story, featuring one of the first female protagonists (Meg Murry) that I’d ever actually been able to relate to. In particular, I found compelling the book’s introduction of the concept of the tesseract — the geometrical element of a fantastic travelling method using the “folding” of spacetime, which allows people to traverse extreme distances instantaneously. I remember spending long, intense moments staring at the pages in the book showing a diagram of an ant crawling along the hem of one character’s skirt, utilizing a fold in the skirt to “skip over” a length of the garment’s fabric.

I also remember reading (over and over again) the section of the book where the characters describe how “squaring” a line produces a square, and how squaring the second dimension produces a cube. The book uses the convention of describing the fourth dimension as time, and the fifth dimension as the tesseract — a construct integrating space and time in such a way as to allow the wormhole-like transit method used by the protagonists throughout Wrinkle to visit multiple planets (and still return home in time for supper).

Of course, certain fantastic liberties are taken with the tesseract concept in Wrinkle, but the underlying idea of how different spatial dimensions relate and build on one another is sound. I literally never looked at the world the same way again after reading Wrinkle — while I’d certainly been aware of the existence of lines, squares, and cubes before, I’d never thought of them as so profoundly significant in terms of the very structure of reality. I developed a very strong interest in the concept of “dimensions”, eventually going on to greatly enjoy another book which explored the concept more deeply — The Fourth Dimension, by Rudy Rucker.

Wind

The first sequel to Wrinkle, A Wind In The Door, was published in 1973 and continued the chronicles of the quirky Murry family (particularly Meg and her youngest brother Charles Wallace). Where Wrinkle charts its course through outer space, Wind plumbs the depths of inner space as the characters race to find the cause of a mysterious, deadly illness threatening the life of Charles Wallace and countless others.

Wind is more abstract and difficult than Wrinkle in some respects; the reader is introduced to worlds and landscapes constructed entirely of thought, to a creature who is at once singular and plural, to journeys that flagrantly disregard usual notions of scale and proportion. As with Wrinkle, however, Wind takes an element of real science (in this case mitochondria, which are the small, energy-generating organelles in living cells) and uses it as a springboard for an intricate and compelling fantasy tale.

One of Wind’s opening pages describes young Charles Wallace’s first day in first grade as follows:

“Your parents are scientists, aren’t they?” [The teacher] did not wait for an answer. “Let’s see what you have to tell us.”

Charles Wallace (“You should have known better!” Meg scolded him that night) stood and said, “What I’m interested in right now are the farandolae and the mitochondria.”

“What was that, Charles? The mighty what?”

“Mitochondria. They and the farandolae come from the prokaryocytes —”

“The what?”

“Well, billions of years ago they probably swam into what eventually became our eukaryotic cells and they’ve just stayed there. They have their own DNA and RNA, which means they’re quite separate from us. They have a symbiotic relationship with us, and the amazing thing is that we’re completely dependent on them for our oxygen.”

“Now, Charles, suppose you stop making silly things up, and the next time I call on you, don’t try to show off. Now, George, you tell the class something . . . “

In addition to feeling Charles’s pain and frustration at being accused of “showing off” merely by talking about his favorite interest, I found myself after reading this passage utterly fascinated by the notion of little parts of our cells having started out as discrete organisms. I remember fairly bursting with excitement by the time we got to the “organelles” section of seventh-grade science, because I knew that we were going to get to learn about real mitochondria (which function somewhat differently from mitochondria as described in the book, but which are definitely actual organelles).

I knew that there were not really tiny blue shrimp-mouse creatures (I never said Wind wasn’t a weird book!) living in our mitochondria, but I was plenty interested in learning how the little organelles actually did work. Certainly, A Wind in the Door had a hand in helping forge my present interest in the technical side of longevity medicine (since one potentially important area of aging research directly involves mitochondria).

A Wind in the Door spends a lot of time playing with concepts of scale, flipping back and forth between immense and minuscule, inside and outside, cosmic and mundane. If I had to sum up the book in one sentence, that sentence would probably be: Yes, the little things matter.

Planet

A Swiftly Tilting Planet was published in 1978. This book takes place chronologically about nine years after the events described in A Wind in the Door — Meg is twentysomething and married by this point in time, and Charles Wallace is fifteen.

Planet initially finds the Murry family united for a pleasant Thanksgiving dinner — all seems well and ordinary until Mr. Murry receives a phone call from the President informing him of a possible impending nuclear threat. Considering the time in which Planet was written, this is not a surprising plot point. Charles Wallace’s ensuing quest is prompted by a surprise charge from the usually taciturn Mrs. O’Keefe (Calvin’s mother), who ridiculously (or so it seems) proclaims that Charles may be able to mitigate the nuclear threat.

Rather than using a concept like mathematics or mitochondria as a jumping-off point for its explorations of character and meaning, Planet instead dips into history and geography, drawing upon such half-legendary notions as the idea of Welshmen visiting the New World (even before the Vikings supposedly did) and intermarrying with Native Americans. Planet is therefore a bit heavier on the fantasy and lighter on the sci-fi than either of its predecessors (one of the main characters in Planet is a unicorn), but it still plays curious games with time and space.

Planet is intensely atmospheric, intensely odd, and a bit on the dark side. The first time I read Planet, I found certain sections almost too intense to process — this book delves deeply into the family history and ancestry of some of the characters, and there’s a fair bit of dysfunction and violence revealed in that exploration. Mostly this has to do with Meg’s husband Calvin O’Keefe’s lineage, though Charles Wallace ends up intertwined in this historical thread when the unicorn Gaudior takes him back in time (and into the bodies and minds of various young men throughout the ages).

Theology

As mentioned earlier, Madeleine L’Engle considered herself Christian, and some who read her books seem to see specifically Christian symbology everywhere (though fundamentalists, predictably, see much of her work as heretical). She was fairly outspoken regarding her own personal faith throughout her life, but not in the sense of preaching to (or trying to “convert”) others; she seemed to be one of those who believed that everyone had to find their own path to understanding reality’s less tangible aspects.

With regard to reality’s more tangible aspects, L’Engle clearly held the utmost respect for science, and for scientists; many of her major characters are top-notch physicists and biologists, and Meg Murry is brilliant in mathematics. The characters in Wrinkle, Wind, and Planet may make the occasional religious reference, but not obtrusively so, and none of the protagonists seem to be strict churchgoers.

In many respects, L’Engle’s themes are actually highly subversive and even transgressive to the point where I would challenge any fellow atheist to read Wrinkle and its sequels and come away with nothing of value. Any author that managed to publish a book in 1962 wherein the protagonist was simultaneously a girl and good at math (and in which said protagonist’s mother was a double PhD in biology who spent more time staring into an electron microscope than dutifully tidying up) is obviously no “Focus on the Family” sycophant. I think it would be just as much of a shame to pigeonhole L’Engle’s writing into being “for Christians” as it would be to pigeonhole it into being “for children”.

In reading her books, one gets the distinct sense that L’Engle had no patience with people who let their personal fears and prejudices masquerade as morality. Yes, L’Engle was Christian, but she did not write (or think) according to anyone’s dogma; her concept of God seemed to be more of the “awe at the sheer magnificence of existence” sort than of the “grumpy bearded fellow waggling his finger at homosexuals and wearers of mixed fabric” sort.

All that said, despite the fact that her stated beliefs differed from mine, I’ve always felt much in common with L’Engle on philosophical matters. There is nothing in Wrinkle, Wind, or Planet that threatens rationality or discourages inquiry. Plus, despite what some contemporary debates might have you believe, there’s a lot more to a person’s worldview than simply the fact of whether or not they believe in God(s).

Good and Evil

L’Engle’s books, like many fantasy novels, concern themselves quite a bit with the struggle between the forces of good and the forces of evil.

In Wrinkle, evil is described in the form of a dark shadow that covers the earth, a shadow which has been there so long that most people interpret its presence as normal.

The exact nature of this shadow is not explained in Wrinkle; it is described generically as being “the powers of darkness”, and its influence is creepily illustrated through the description of the horrible planet, Camazotz, on which Meg’s physicist father is initially imprisoned (he and a colleague accidentally ended up there in the course of one of their top-secret physics experiments).

Camazotz is a nightmare of enforced conformity and bureaucracy; children are expected to all bounce balls and jump rope in precise rhythm, and if any of them deviate even slightly from this, they are subjected to painful behavioral treatments. Everything requires paperwork. Anyone who so much as catches a cold is “put to sleep” (i.e., murdered) so as to spare them any “suffering”.

In short, Camazotz is a lazy philosopher’s utility maximizer gone horribly wrong. L’Engle aptly demonstrates in Wrinkle that evil is not necessarily the sort of thing one can identify by looking for stereotypes of mindless malice, but the sort of thing that can come about when people oversimplify reality to such a degree that their efficiency drive becomes destructive. Ethical negligence can be just as terrible in its effect as a deliberate breaching of ethics. And when malice does emerge, it can be the effect of (rather than the root cause of) the power imbalances that ensue from this negligence.

L’Engle’s literary concept of evil is developed in further detail in Wind and Planet, and personified in the form of the Echthroi (Εχθροί) — a term which means “the enemy”. The echthroi seem to be the embodiment of destructive nihilism. And despite having neither form nor voice, they are some of the scariest “villains” I’ve ever encountered in fiction.

In Wind, the echthroi are portrayed as the perpetrators of a phenomenon called “Xing”, which is basically the active negation of someone else’s personhood. Humans, other sentient creatures, and echthroi alike can X others — the echthroi are (like The Nothing from The Neverending Story, and The First Evil from Buffy the Vampire Slayer) the fundamental force which is served and strengthened by evil acts even as it inspires the hate and despair that prompt such acts.

The echthroi (and the “Xing” concept) are frightening on that visceral level that anyone who has ever faced a bully will surely recognize. The negating impulse inherent in bullying is shown to be the very same brand of evil that results in people being burned as witches, or deemed “inconvenient” (e.g., because they stand in the way of someone’s ambition for the throne), or tossed aside as insignificant or useless due to some perceived imperfection.

The concept of good as expressed in Wrinkle, Wind, and Planet is a very active one — good is not a passive quality, or simply a feeling, but something people do. In many respects, this characterization of good is practically synonymous with love. Not love in the sense of infatuation or even romance, but rather, in the sense of actively respecting someone’s personhood and helping them to find their own way of seeing joy in existence.

L’Engle’s characters tend to learn about love through breaking out of the common delusion that love happens according to a formula or a set of token symbols. Often, love involves learning things you’d rather not learn, and in risking losing your sense of comfort in the world for the sake of knowing what is actually true.

While Wrinkle, Wind, and Planet make occasional references to gods of every stripe from Abrahamic to Celtic to Native American, L’Engle does not rely on these superlative entities to transmit the idea of what goodness is. Rather, she relies on the personal journeys of her characters (flaws and all), in order to demonstrate that being good is not about being all-powerful, but about making certain observations about reality and acting accordingly.

On Naming

One of the fantasy elements in Wind that I think bears particular mention here is the notion of people having particular vocations, or “callings”. Meg Murry, for instance, is a “Namer” (whereas two of her brothers are “Teachers”). Another character (Proginoskes, who is nonhuman, looks like “a drive of dragons”, and is either immortal or extremely long-lived) describes how he once had the task of memorizing all the names of all the stars in all the galaxies. The point of this exercise was to “help them each to be more particularly the particular star each one was supposed to be”.

I am almost reluctant to try to describe the personal significance this Naming concept has for me, because I am afraid that no matter how I try, it still might come across as trite. But I am going to attempt it anyway, because a lot of L’Engle’s goodness mythos is intimately tied to many of the notions of uniqueness, self, and identity that figure prominently in my own writing and thinking along these lines. The little anecdote about naming stars above might sound simple, but in my own private symbology (that I rarely, if ever, find the ability to describe in words), it is anything but.

Acknowledging the “little things” — the small and seemingly mundane details of existence — is a personal habit that borders on the sacred for me. When I leave work in the evening, I am often beside myself with joy as a result of seeing the tributaries of a particular crack in the asphalt, or of seeing a splash of patterned light (filtered through the windblown leaves of a tree) race across the ground as a breeze cools my face. And in some weird sense, I feel very much at these times as if I am exchanging information with the universe-at-large — I am existing and perceiving the little things that make it up, and at the same time, those things are responding to my presence via the diversion of air current around my nose and the whisper of photons glancing off my retinas.

I know it might sound silly, but I guess I feel like there should be people who know the names of stars, and of leaves, and of sidewalk cracks, for that matter. As someone who used to read the dictionary for fun, and who still enjoys memorizing the ingredients label of every food or toiletry item that comes into my home, I can see perfectly the logic of memorizing stars.

I see the sum total of conscious minds in the universe as a sort of network through which information is processed into joy and beauty and art and music and mathematics (and all those other delightful forms into which we can now channel the energies evolution has serendipitously gifted us with). And the more different kinds of minds there are, the more the totality of sentience gets to experience of what there is to experience.

In short, it is all well and good to raise your eyes toward a fireworks display with your neighbors, but do not necessarily believe that the youngster watching a caterpillar slowly inch along the ground during the light show is “missing out”.

So, while I strongly support the right of all persons to self-configure to the greatest degree possible, I think it is also important to avoid establishing overly nihilistic concepts of self that dismiss personal uniqueness (and the constraints that all of us face inasmuch as we can never be all things at once) as “essentialism” or “identity politics”.

When Meg Murry learns to appreciate herself for who she is rather than pining to be someone else, this does not mean that she stagnates or tries to define herself according to a particular hairstyle, or on the basis that she wears glasses. It simply means that she learns to look within herself and see how to use the reality of how she is configured to accomplish her goals and grow up into a more competent and confident individual. Not according to the status quo, but according to a more personalized (and in many ways, more rigorous) set of standards.

Regardless of how someone gets to be the way they are — whether they are born that way, or whether they become that way as a result of experience or development, or whether they choose to alter themselves over the course of their life — they still exist in a particular form at every point. And there is a kind of art and skill to being able to know the ins and outs of one’s form deeply. L’Engle’s characters’ journeys often involve coming to this level of self-awareness, and it is a great strength of her books that this is accomplished without recourse to platitudes or cliches.

Joy

Now, to explain the title of this article, and its connection to my writing. The title of my blog, Existence is Wonderful, is basically my attempt at shorthand for expressing That Joy In Existence Without Which The Universe Would Fall Apart and Collapse.

This is a phrase that is repeated throughout A Swiftly Tilting Planet — it is the stated meaning of the names of two characters (Ananda, a dog, and Gaudior, a unicorn). It is also the fundamental essence of Charles Wallace’s quest in that story: to help the world recognize joy again. In Planet, the nuclear threat that drives the plot is representative of that basic, chill despair that sets in when a person decides that nothing means anything and that it therefore doesn’t matter if it all goes away.

With that in mind, part of what I aim to do when I write — whether it be about life extension, or neurological variation, or any of the other topics I cover fairly regularly — is to get the message across that the universe is simply teeming with meaning and opportunities to experience joy.

Just because life won’t grab you by the collar and tell you what it means doesn’t mean that it doesn’t mean anything.

I believe that it is far better to see the pursuit of meaning and joy as a creative process than as a passive one.

And toward that end, I will continue to publicly express the sentiment that existence is, undoubtedly and infinitely, wonderful.

Rise of the Robocars!

Jamais Cascio at Open the Future recently posted (along with his own commentary) a link to an essay by the Electronic Frontier Foundation’s Brad Templeton: The Implications of Robot Cars and Taxis.

As one of the seemingly few non-driving Americans out there, and as someone who finds robotics in general pretty fascinating, I’m tremendously interested in the promises, prospects, and particulars of robotic vehicles. Though perhaps not as high on the list of “things I’d like to see in my lifetime” as, say, drastic improvements in social justice, effective longevity medicine, and widespread scientific literacy, robotic cars are definitely somewhere on said list. Not only could they benefit the environment (by automating certain common driving tasks and making them more efficient, thereby saving fuel), they could potentially provide new options for those who cannot drive regular automobiles today, as well as drastically reduce traffic injuries and fatalities across the population.

Anyway, Templeton’s essay is definitely worth a read. He points out, quite rightly, that it was only a few years ago when the very idea of self-driving cars was considered pure science fiction. Now, given the impressive (and improving) performance of the autonomous vehicles in the DARPA Grand Challenge (a military-sponsored contest in which teams competed to create cars that could navigate a track without a human driver), more and more people are beginning to seriously consider robot cars as a potential reality for civilian applications.

Templeton also emphasizes what I often see as a much-neglected truth about automobile safety these days: that is, driving isn’t particularly safe for anyone, not just those of us whose perceptual systems are optimized for activities other than driving:

Car accidents kill about 45,000 people every year in the USA, and a million around the world. They injure and maim millions more, and tear apart many more lives with grief, for these are all premature deaths, often among the young.

Consider that number in context. That’s just a bit fewer than the numbers who die of Alzheimer’s and Influenza, and more than the death toll of kidney disease, infections and suicide. It’s double the death toll of liver disease and hypertension and nearly triple that of homicide. It’s more than most individual diseases and cancers.

For young adults 15-34, of course, who do not fall nearly so often to heart disease or Alzheimer’s, it is the leading cause of death among the established categories.

Cars make life a lot more convenient for a lot of people, though — to the point where I think many lose sight of the risks involved, or consider that they have to accept these risks because they don’t have any other viable transportation options. Now, of course there’s no guarantee that robot cars would be safer for the mere fact of being robotic, but it is definitely true that a well-designed robotic vehicle might very well be able to avoid some common areas of egregious human error. Templeton notes:

[The fact that the cost of accidents is arguably the single largest component of the per-mile cost of driving a vehicle] is important because to be accepted, robocars must have a dramatically lower rate of accidents — as close to zero as possible. While no software system can ever be truly free of bugs, because a “crash” here has a literal as well as metaphorical meaning, teams must work particularly hard. In addition, these technologies will arrive incrementally, in the form of “crash-resistant” cars which are still mostly driven by people.

The essay goes on to discuss the potential cost savings of robot cars, the areas where such cars might first be deployed, the attributes of today’s vehicles that might suggest we’re moving in the direction of “smarter” vehicles, etc. Check it out if you’re curious about such things — regardless of whether you agree with all the premises and conclusions, it’s a good, comprehensive collection of thoughts on the subject of robotic vehicles.

Personally, when I think about what it might take to get robot cars deployed and put to use, I think not only in terms of the cars themselves but of the infrastructure they’d inhabit. A while back, I commented on a really neat 1968 Mechanix Illustrated piece that attempted to describe the world of 2008 (which we now inhabit). I’ve long loved reading retro-futurist stuff (ever since I found a pile of ancient Science Digests in my great-grandmother’s basement as a youngster), not only because it can be highly amusing, but because it can provide interesting insights into what the priorities and biases were in the past.

Anyway, the Mechanix Illustrated piece was particularly fascinating in that it ended up juxtaposing several eerily accurate predictions with several that just sound silly, to an even greater degree than I normally see in articles along similar lines. After reading it, I got to thinking about what characterized the accurate predictions vs. the inaccurate ones, and the main thing that came to mind was that it seems to be a lot easier to predict advances in communication and commerce than in large-scale infrastructure.

In other words, the article’s description of television-telephone systems that allow families to shop for products from their own homes sounds a heck of a lot like Amazon.com and their ilk, but its description of gigantic super-domes over cities and special roads populated entirely by fast-moving autonomous vehicles sounds frankly kitschy given how 2008 actually ended up looking.

Much of 2008’s urban/suburban landscape looks very similar to 1968’s these days (at least based on pictures I’ve seen; I wasn’t born until 1978) — we’ve still got asphalt-paved roads, internal combustion vehicles everywhere, houses with peaked, shingled roofs and brown carpeting, etc. Sure, people are dressing differently these days, cars have different contours, and shopping malls are looking shinier, but most of that is essentially “window dressing” and fashion as opposed to unheard-of developments hastening a move toward crystal spires and togas.

Most of the things that might actually count as “revolutionary” developments (as usual, keeping in mind that over half the world still lacks flush toilets) remain subtle, even furtive: cellular towers blending inconspicuously along stretches of freeway alongside silos and power poles, blue CAT-5 cable stuffed and strung like bundles of blue spaghetti behind pithy office ceilings outfitted with flickery fluorescents, tiny computers nestled in purses and pockets. Certainly at least some lives, and much of the communication and commerce infrastructure have changed very much since 1968 — but the physical landscape, and the ways in which we get around from place to place, really haven’t.

Nevertheless, I definitely don’t think we’re going to need “domed, evenly climatized cities” (which don’t sound like much fun anyway) in order to have robot cars, but things are definitely going to need to change rather a lot in urban and suburban areas before robot cars can really make the splash they ought to in order to enter common use.

Initially, this might mean something like “automated valet” services (which Templeton mentions) that will park your “smart” car in a garage when you arrive at your destination, and I can see something like this happening with something resembling existing infrastructure (in some parts of the world/country). Later on, though, we’re going to run up against the matter of where people will want to live vs. where they want to shop, eat, go to school, work, etc. — and that might entail larger re-builds of roadways and other current routes to support greater automation.

I definitely look forward to following further developments in this area!

Robots: Evolution of a Cultural Icon

On Saturday, April 26, 2008 I visited the Robots: Evolution of a Cultural Icon exhibit at the San Jose Museum of Art. I’d been quite excited to go (being a shameless robo-fangirl and all) and the exhibit did not disappoint.

Matt (my steadfast and very patient Significant Other) and I arrived in downtown San Jose shortly after noon, where we joined up with two local friends and proceeded to catch a quick lunch prior to entering the exhibit. A large banner hung on the front of the museum displaying a gigantic image of a metal robot with a clock embedded in its chest. The connotation was unmistakable: here, there be robots.

There were no “No Photography!” signs up at the museum, so initially I had my camera out, and managed to get two or three shots of several exhibits before a museum employee informed me that picture-taking was verboten. I apologized and put the camera away, and do not plan on publicly displaying the exhibit photos I took (in deference to the Lords of Copyright), but you can still view images of some exhibits on the museum’s Web site. The museum has also released an online video series which includes a fair bit of exhibit footage and commentary.

First Impressions

The exhibit includes paintings of robots, sculptures of robots, quilted robots, model robots, toy robots, drawings of robots, metal robots, and plastic robots.

Implementations range from the simple line drawing to the highly complex electromechanical avatar.

One of the latter is equipped with two flat-screen monitors, each displaying a large humanlike eye (and yes, the eyes follow you).

Another is constructed almost entirely of small CRT television monitors, each showing an identical animated pattern flashing through endless cycles of decidedly psychedelic imagery. The CRT-monitor ‘bot was rather unnerving to stand near — not because of its appearance (I was actually quite excited at all the power strips and outlets all over it, as I am totally Arthur Weasley when it comes to electrical plugs and sockets), but because of the massively multiplied high-pitched whine chorus emanating from all those CRTs.

I don’t know if the artist was trying to make a statement about the pervasiveness of electronic “noise” in the world these days or whether that particular piece was there to keep bats away, but it was definitely one of the more abstract pieces in the exhibit.

Another piece is humanoid in form, mostly metal in its construction, but adorned with a pair of deer antlers, one on each side of its head: a mechanized Herne. In its belly behind a clear plexiglass cover sits a smaller metal humanoid, pumping and pedaling away so as to drive different but coincident motions in the larger figure. That one evoked all kinds of weird associations, but most predominantly it seemed an irreverent wink at the notion of the homunculus. And it was probably one of the most damn-cool looking things I’ve ever seen in an art museum.

On the “low-tech” side of things, a particularly impressive structure stands nearly ceiling-height (in a room with a very high ceiling); it is constructed entirely of Styrofoam package inserts from actual electronic products. It presides over a circle of surrounding, smaller Stryobots and several tables at which visitors are invited to build their own model robots out of provided Lego bricks.

A quote on the wall reads: We Were Promised Robots, in reference to the contrast between the retrofuturist-nostalgia version of a robot-enhanced reality and the actual present and emerging era of pervasive electronics that, while certainly more impressive in some ways than previous generations could have imagined, is decidedly different from what was envisioned.

In reflecting upon that contrast, I cannot help but feel at once that things have turned out better than imagined in many respects (and I’m not just talking about iPods and flat-screen TVs here, but about civil rights, women’s rights, gay rights, etc.), but that we as a species still have a tremendously long way to go with regard to things like resource distribution, respect for our neighbors (regardless of who we are or where we live), and sustainable development. I’m not sure how to feel (much less what to do) about the fact of my having a nice shiny computer, a comfy apartment in a reasonably safe neighborhood, and easy access to art museums, while half the world population doesn’t even have access to flush toilets.

Did the futurists of the 1950s and 1960s (who envisioned widespread atomic superabundance) expect fair and ethical resource-distribution systems to come about by magic, or perhaps with the help of friendly robot assistants?

The Robot as Self and Other

In film, art, and literature, robots have appeared to cross all cultural and class lines. Sentient robots in stories have been portrayed almost as a kind of enslaved underclass in some scenarios, even as they’ve busily worked toward taking over the world in other scenarios.

Iconic robots can serve to reflect ubiquitous anxieties present in modern industrialized culture: perhaps unresolved guilt and fear about the consequences of maintaining an underclass or worker class (whether that be the continued and un-addressed exploitation of sentients, or the classic “robot uprising”), as well as a sense that maybe the collective will of the machinery we construct might be essentially shackling us to its agenda, rather than the other way around.

But just as our machines do in life, the robots represented by the exhibit pieces defy confinement to any one role or position, and instead overlap and inhabit multiple contexts. One universal feature of life (especially human life) is that it co-opts pieces of its environment over time, as is required to maintain itself as a process. Humans are particularly adept at this, to the point where we are not only becoming increasingly able to maintain ourselves in the face of circumstances that would assuredly have killed our ancestors, but also increasingly confronted by the blurring of boundaries between self, tool, and resource.

Fictionalized and aestheticized robots are perhaps the ultimate confrontation in this regard, existing as they do somewhere between extension-of-self (in tool form) and autonomous “other”, and frequently muddling this distinction entirely.

The Robots: Evolution of a Cultural Icon exhibit provides many representative examples of this muddling.

One stark set of line drawings (done in classic Chinese pen and ink style) shows a humanlike figure sailing through the air, borne on the back of a birdlike robot, into which another humanlike figure has been inserted or merged. It is impossible to tell who is calling the shots (pilot, craft, or passenger) and perhaps the point is that it is not necessarily useful to attempt to delineate such things in the first place, at least not in any absolute sense.

On a wall in the museum, a projector plays the Björk video, All is Full of Love, on infinite repeat. The inclusion of this video in the exhibit was somewhat surprising to me at first (as you don’t exactly need to go to a museum to access a popular music video these days), but in the context of the exhibit, viewers are encouraged to consider All is Full of Love in a mindset which is less MTV and more imagery-focused. I’d seen this video before and found it at once unsettling and gorgeous, and watching through it again my reaction was similar.

However, with this viewing of the video I also noticed a lot more of what I like to refer to as “stuff English teachers love”, by which of course I mean “stuff that can be interpreted as having sexual connotations”. Nevertheless, there is no human flesh to be seen in the video; the closest we get are the stylized humanlike faces of the two gynoids that move through varying stages of construction and deconstruction and entanglement and separation.

The video is also interesting in that it simultaneously shows robots of an obviously fantastic nature, and robots that are more realistic and familiar to anyone who has ever seen an actual industrial robot. The gynoids look more human than the faceless hydraulic mechanisms disassembling them in reverse, but who built the mechanisms? Which type of machine more properly suggests the usual output of human will? And more importantly, what does each of us want the output of our will to look like?

On a less serious note, the exhibit also provides a set of easels at which visitors can sit and draw their own “robotic self-portrait” with provided crayons. Two mirrors printed with “robot face” outlines hang on the wall facing the easel seat, presumably so we’ll be compelled to line up our actual faces within the outlines. This was all a bit silly, but too much fun to resist; I spent about two minutes sketching a (very rough) AnneBot.

The idea of the exercise is to draw a robot and think about how your robot reflects how you see yourself. I’m not exactly sure what my result says about me (if it says anything at all), but it was neat to have the opportunity to sit and play with crayons in a public place. And the exercise did get me thinking about how robotic imagery has historically tended to communicate things about both its creators and the cultures they inhabit.

Robots That Think And Feel

Text painted on one of the exhibit’s walls declares: “The bipedal humanoid robot with fully developed artificial intelligence may be realized in the near future”.

As is commonly the case with declarations such as this, little is offered in explanation of what “artificial intelligence” actually means, let alone what it means for such a thing to be “fully developed”. My guess, though, is that when people make predictions about “fully developed AI”, they are envisioning artificial “brains” that function exactly the way human brains do, albeit on some substrate other than biological wetware.

Such “AIs” have existed in literature for quite some time; however, they are conspicuously absent from the real world, and my guess is that they will likely remain absent indefinitely. Even if “artificial humans” were feasible to construct, humans of sufficiently differing internal architecture seem to have a tremendously difficult time communicating effectively with one another — even the oft-cited human superpower of “empathy” seems in practice often restricted to persons sufficiently similar to the self.

So the question emerges: how do robots, both fictional and actual, reflect how humans think and feel about the very processes of thinking and feeling?

In some depictions, robots are assumed stonily indifferent and consequently feared. After all, what could be more dangerous than an enemy who does not see you as an enemy, but as a pile of raw materials to be exploited or recycled? In other cases, the perceived hyper-rationality of the robot is valorized and sought as an ideal, “perfect” state in which the purity of reason might shine forth without the messy complexities wrought by amygdalae and endocrine systems.

As far as I’m concerned, both these reactions are rather puerile. Robots and emotion are inextricably intertwined, no matter how you look at it, and it makes little sense to infuse them with such superlative and impersonal power whether you’re drawing them or thinking about actually building them. So it was refreshing to see at the exhibit a range of different depictions, some of which went for direct subversion of the stereotypes.

One piece that compelled much in the way of lingering and staring on my part was a small, unassuming-looking “shadow box” hung on the wall in one room. Its area probably did not exceed a square foot; it commanded attention not by looming over you in the imposing manner of the giant Styrobot in the adjoining room, but by drawing you in like an open window into a miniature world.

A toy robot sits on a chair in this piece, in what looks like a handmade doll’s-house living room. Tissues (both boxed and used) clutter the area; the robot also clutches a crumpled tissue in his hand. A portrait hangs on the rear wall of the shadow box/living room, depicting (presumably) the occupant’s Robot Grandma. A tiny model television with a real, working screen plays clips from Fritz Lang’s Metropolis.

And if you look closely, you can see a tiny lacquered tear on the watching robot’s cheek.

Even if we are truly talking about robots as tools — actually emotionless mechanisms employed in the extension of human intent — we are still dealing with emotion-infused machines, as the emotions in that case are ours. (Sometimes our machines even prosthetically become parts of us as well, to the point where having someone else touch or take them without our permission feels like a bodily violation, because that’s exactly what it is.*)

And if we are talking about fictional robots equipped with some measure of autonomy via artificial-intelligence mechanisms, you would be hard-pressed to find a literary example of a robotic character that has not been anthropomorphized in some way. And a particular challenge for artists and roboticists alike is that of determining how to “blend” mechanical and human attributes effectively for whatever purpose the robotic character or actual robot is being invoked.

On that note, I’ve been to a few AI-themed lectures and listened to numerous episodes of robot-related podcasts (such as Talking Robots, which I highly recommend), and one thing that seems to be coming up a lot these days is the notion of robots being designed according to [typical] human reciprocity expectations.

What concerns me (a little bit) here is that perhaps the reason why we see statements like “We’ll have fully functional artificial intelligence in the near future!” on the walls of art museums is because so many public and popular demonstrations of robotics technology feature creations that set off human “comfort and familiarity” cues.

Of course this is not problematic in and of itself, but whenever I come across an article about how robots are beginning to demonstrate social reciprocity, I can’t help but be reminded that actual existing people (who might not show these typical reciprocity signs in easily-recognizable ways, due to being autistic or otherwise atypical) are still being written off as “empty shells”.

Don’t get me wrong — I think robotics research is super neat, and I can see how studying human reactions to a robot’s nonverbal behavior might yield fascinating insights into multiple aspects of social cognition. But at the same time, I think it is interesting to look at the assumptions behind the display (or lack thereof) of certain “signals”, in humans and in robots.

Hence, one of the things I’ve always appreciated about “robot art” is how it often actually manages to acclimate people to atypical expressions of both emotion and cognition. Iconic robots do not always look or even act typically “human” (R2D2, for instance), and yet, people come to love them anyway.

Closing Notes

Robots: Evolution of a Cultural Icon definitely lived up to my expectations (which, admittedly, were along the lines of, “This exhibit will contain cool, robot-themed art pieces”). The exhibit was not large (it spanned only part of a single floor in the multi-story museum), but it didn’t need to be. I was actually rather pleased at how the setup and structure of the exhibit allowed visitors time and space for reflection on individual works — the pieces were not crammed or crowded together, and while there was a guided tour option, this was not mandatory. The environment was also quiet and clean and not sensory-overloading (hooray for sensory accessibility!). It probably took about two hours to go through the entire exhibit (and that time span included several instances of lingering a long while to examine particular pieces) — a pleasant length for a weekend afternoon outing.

I have definitely been inspired by what I saw, not only to write about it as I have here, but to keep exploring the cultural and artistic contextualization of robots in addition to the mechanisms by which actual robots operate in the real world.

After all, we and the robots we build, draw, and create as characters are essentially vectors along which the stuff of the universe explores different avenues of expression. And what is so strange, given that, about the idea that all (whether it be biological or mechanical) could indeed be “full of love”, as Björk’s video suggests, hopefully without irony? Perhaps the separations we try to enforce between what is “life” and what is hard cold material are, in fact, overly facile.

In any case, it will be interesting to keep watching the interplay between real robots, humans, fictional robots, and robot-themed art as the world and its people change over time. And while there is no way to predict what shape this interplay may take in the far-off future, one thing seems likely to remain certain: our iconic robots have (and will continue to have) much to tell us about our individual and collective fantasies, fears, dreams, and priorities.

Facing the Quasi-Autonomous Robot Monsters Under The Bed

NOTE: This is a tidier edit of the essays first published here and here.

What If My Toaster Burns My Bagels Because It Hates Me?

Given the subjects I tend to focus on in my writing, I’m often asked questions regarding issues of autonomy, will, cognition, perception, robots, and personhood. These questions tend to be filled with fuzzy, difficult-to-define terms, and what’s more, they’re commonly asked by people with a clear agenda (whether it be making a case for the existence of “souls” and supernatural superbeings, or asserting that nothing matters because no choice is actually real because individuals aren’t really real). And, like the proverbial monsters under the bed, they sometimes keep me up at night trying to hash through all the various contingencies and semantic gymnastics even beginning to address them would require.

But at a certain point, thinking about the monsters (quasi-autonomous robot sort or otherwise) that might be under the bed (and how to avoid them) starts to become more exhausting and annoying than just switching on the light and either proving that there aren’t any monsters at all, or getting to know them a little better if there are. So while this writing will by no means address these questions with “airtight” answers, it should at least give a sense of what goes on in my head when I approach them.

Decisions and Autonomy

Most dictionaries (see here for an example of this) seem to define “autonomy” in terms like independence, freedom, self-direction, and self-governance. I don’t have any argument with the dictionary in that regard; in this discussion, however, the “autonomy” I have most in mind is that which describes a discrete and independently-operating locus of consciousness, awareness, and thought.

In this sense, humans and cats and even mice can be said to be “autonomous”. Every human, cat, and mouse has some level of wholly private experience that no other entity can directly access. That is the usual sense in which I think about autonomy. There are, of course, other complicating layers and definitions on top of that one — some involving decision-making ability and legal sorts of independence from external coercion and control — but the basic unit of “autonomy” for me is the individual mind.

As far as what it means for an entity to be capable of making decisions for itself, that’s another question entirely. It’s also a question that depends on what you’d consider a “decision” to be, and whether you automatically require explanations of agency in addressing that concept. I would definitely allow that in some respects, entities (we can assume to be) wholly lacking in minds are, in fact, capable of “making decisions”.

Say you write a function in C that will output one string if a variable has a value of less than five and a different string if a variable has a value of five or more. Many of us would, at least colloquially, say that the function decides which string to output based on what its input value is. And if this function existed in the context of a program where any of multiple functions might be called in response to higher-level inputs, we might say that the program decides which functions it is going to call.

But if we dig a little deeper, it’s clear that the colloquial conventions in which program behavior is commonly discussed do not reflect the (arguably) “ultimate” sources of the program’s behavior. The software engineer writing the program in fact decides in advance what the program’s outputs are going to be. She decides what inputs will prompt which functions to be called, and she also decides what criteria will determine the outputs of each function.

But say we dig even deeper than that! Say the software engineer is writing her program according to a set of requirements handed to her by her boss. Her boss may not even know the C language himself; all he knows is what he wants the program to do. So he provides “inputs” to the software engineer in the form of requirements, which are probably written in “natural language” as opposed to code or pseudocode. The engineer then takes the requirements and processes them into a C program.

But let’s not stop there! Say the boss got the requirements from the customer over the phone. Furthermore, say that the customer speaks only Chinese, whereas the boss speaks both Chinese and English. The boss, in this case, has to translate the customer’s requirements into English so that the software engineer (who doesn’t know a word of Chinese) can understand them. This entails the boss having to make a lot of decisions regarding what the customer likely meant by certain turns of phrase, and it also entails the boss having to think about what points to emphasize most strongly so that the engineer gets a sense of the customer’s priorities.

Now, lest anyone think I’m veering into the realm of cybernetic totalism, let’s pause a moment. While one could indeed condense the software engineer herself into a code-producing box into which you put requirements and out of which you get a software package, and while one could reduce the boss into a Chinese-English requirements translation machine, surely it is apparent that there are discontinuities in this instructional chain. That is, as you move further away from any individual program output (let’s say, a string printed on the screen that reads either “X is less than five” or “X is greater than or equal to five”), things become less and less easy to determine, and far more subject to uncontrollable variables.

When writing a C program, the kinds of inputs the computer gets are constrained to a relatively narrow set — the programmer generally uses a keyboard interface to type the code in, and very specific rules of syntax must be followed in order for the program to do anything at all when it is compiled (much less the precise thing the programmer wants it to do). What’s more, as far as any of us knows, present-day desktop PCs don’t have anything like the “internal life” that we humans do, or like chimps or cats or mice do, even. Neither the computer nor the individual program being written is “autonomous” in the basic sense of having a private vista of self-aware reflection embedded in a larger reality — notions of the computer and program “making decisions” are found only in a kind of linguistic folklore rather than in literal points of fact.

Certainly one might suggest that “random quantum events” or short circuits or power surges might result in the program being written behaving differently than the programmer specifies it to behave, but essentially, the program’s outputs are severely and rigorously constrained by the programmer’s textual inputs.

The programmer, on the other hand, is autonomous in the sense defined at the beginning of this writing. She has a mind that nobody else can experience the way she experiences it. She can have thoughts that aren’t detectable by any other people or by any measuring instruments. And she can, in the most general folk sense of the word “choice” (the one that ignores the vast and convoluted and seemingly never-ending Free Will Debate), choose:

(a) whether or not to write the program at all

(b) whether or not to come to work in the morning

(c) whether to keep this job or seek another

(d) whether to make one function perform a particular task (or split the task across two functions, one of which calls the other)

…or any number of other options that will affect the nature of the program, up to and including whether or not it comes into being at all.

Similarly, the engineer’s boss can make a whole slew of decisions (from the vantage point of his autonomous perspective) that will also affect the fate of the program, albeit not as directly and obviously as the decisions made by the engineer will. He can, like the programmer, make decisions that result in the program not being written at all — e.g., he might decide not to give her the requirements because he wants her to focus on a different task for the rest of the afternoon.

He might decide to second-guess something the customer said on the basis of a perception that he knows slightly more about programming than the customer does (whether or not this is a wise move is beside the point for now). Etc. And by extension, the customer can choose to describe the requirements in one way rather than another based on how important he or she believes this particular project to be — e.g., s/he might be more thorough and concerned about making sure the engineer’s boss really understands the requirements, or s/he might just rattle off the requirements vaguely and carelessly due to feeling that the project is inadequately funded to begin with.

So far, I’ve described the program’s pseudo-decision-making process — e.g., the fact that the program branches at certain points, but not due to any kind of internally conscious self-reflection on the program’s or the PC’s part. I’ve also described the “volitional-feeling” choices made by the engineer, her boss, and the customer.

But there are other factors that can indirectly affect the program as well that come from the human agents in the instructional chain without necessarily “feeling like” choices.

For instance, if the engineer is tired or hungry, she might not consciously decide to make the program sloppier and less modular, but it might come out that way anyway because she’s not performing at her best. Similarly, if the engineer is well-rested and cheerfully sipping away at her Mountain Dew (provided generously by the company), the program might come out in a much slicker and more efficient form — again, without any conscious feeling on the engineer’s part that she’s choosing for the program to come out that way as a result of sentient and deliberate decision-making.

And if the boss is distracted by other projects when he’s taking the customer call, he might inadvertently write down the requirements sloppily. He might make typographical errors. He might hear the customer say a Chinese phrase he doesn’t recognize, at which point he’ll look it up in his Chinese-English dictionary, and in doing so discover that there’s another phrase he got wrong earlier in the conversation.

In any case, the program is going to be affected by things the boss does and various inputs he might receive and consider on a non-volitional-feeling level. The same goes for the customer — their instructions might seem to say one thing rather than another based on whether or not the customer has a scratchy throat, or based on background noise in the customer’s or boss’s office. And so on, and so forth.

Will, Free And Otherwise

Someone once asserted in response to something I wrote: “It seems to me that if everything is contingent upon determining material processes, then everything is determined and true decisions don’t exist.”

Here we encounter the can of worms that is the Free Will Debate. What is a “true decision” seems quite subjective, and I certainly cannot hope to put forth a definition of “true decision” that everyone will necessarily relate to or agree with. The best I can do is describe what things seem most like “true decisions” based on my own interpretation of what it means to make a decision as a conscious, autonomous entity.

In my example above involving the programmer, I’d be plenty satisfied to classify the “volitional-feeling” choices made by the engineer, the boss, and the customer as about as close to “true decisions” as humans can assert the existence of. Certainly, one can try to claim that everything about a person’s life up to the moment they made the decision about this program actually “determined” its final state, and that there was nothing truly “volitional” about their decision, but one cannot deny that in everyday life, things we do on purpose feel qualitatively different than things we don’t do on purpose.

Usually, that is. People can, after all, be coerced (by other people, by physiological inputs registered subconsciously, etc.), and in some cases people might “feel like” they are acting volitionally even when they’re mainly responding to deep, low-level impulses like fear and reward. But at the same time, people are also capable of emerging from coercion and being able to look back and identify when they were actually being coerced (or compelled) and when they weren’t.

In light of all that, even if you’re a “hard determinist” in the “we’re all just objects going through the unconsciously-programmed motions that could have been extrapolated at the moment of the Big Bang if only someone had had a big enough computer, and nobody really makes any kind of meaningful choices at all because of this” sense (I’m not one of those, by the way), I don’t see why you’d want to ignore the many and various levels of “feelings of volition” and emergence from/descent into coercion that humans and presumably other entities seem to experience.

Clearly, there’s something interesting going on in the brain across all these experiences. And there are plenty of philosophical and ethical implications here: personally, I think that an “ideal” ethical state with regard to personal autonomy is one in which coercion is minimized, and in which the individual has access to whatever information she might need to make maximally-informed decisions.

Tools and Toys, Bodies and Minds

Tools are a particular class of objects not normally considered autonomous individually, but which are used by agency-possessing individuals in the fulfillment of particular goals. Tools can certainly be anthropomorphized: I know of several people who have named their cars, and most people who use computers regularly can’t seem to help projecting humanlike emotional maps onto their machines, particularly when said machines seem “cranky”.

Still, thinking of tools as a particular class of objects that can serve as extensions of self (or extensions of will, perhaps) is very useful, particularly when viewing the “person” as embedded in and part of the environment, as opposed to somehow distant from it.

My notion of personhood, or at least one formulation of it, can be stated thusly: I am a small piece of the universe observing itself.

If I had to sculpt a geometric model of reality (a daunting task if there ever was one!), one possible model might resemble a big rubber sheet pulled to tiny points in some areas, stretched thin in others, pushed to a smooth roundness in still others, etc.

Basically, while parts of the sheet would certainly have their own identities and local characteristics, and while each part would consequently be an entity in its own right, all parts and the interconnections between them would still comprise a larger entity.

Sticking with that model for now, let’s say a person is initially represented by a point on the sheet pulled sharply upward. As this person grows, develops, learns, and interacts with the other local surface irregularities, relationships will be established with those irregularities. Depending on the type and nature of each irregularity, the relationship between it and the person will effectively change the shape of the person in some way. Some irregularities might make the person-representing point poke out further from the plane of the sheet, whereas others might smooth it out and draw it closer. Yet all the while, the person maintains a sense of continuity, and certain aspects of his trajectory through time will always show the influence of his initial conditions.

And just as the sheet itself provides fertile ground for a tremendous diversity of individual forms, each person-point is simultaneously capable of evolving in any of a fantastic array of directions and of maintaining a distinct sense of continuous personhood.

Additionally, every person, generally speaking, sees “ownership” and control of his or her body as a precious and deeply-held right. Given the manner in which tools are employed as extensions of will, they are also in many respects extensions of the body — and most people would be hard-pressed to truly define where “they” end and where their tools begin. It’s rather strange to think about it in this way, but honestly, I would feel as if I’d undergone some sort of amputation if my computer’s hard drive were suddenly and irrevocably wiped!

But if tools are a special class of object, do they differ from “machines” in general? If they can be considered parts of beings, and subject to the decision-making processes of those beings, what does this in turn suggest about the nature of object-boundaries and agency?

Invoking the “sheet model” again, perhaps tools would represent those irregularities that can be effectively “absorbed” by the person-points to the point of becoming part of them. Similarly, tools can also be discarded and/or removed when the person no longer finds them useful, or when they begin to pose some problem. The “body” over time cannot be said to be a static clod of matter — rather, the body is a dynamic process that winds its way through spacetime, memory and sensation incrementally bridging the piecewise generations of cellular turnover. In some respects, cells and eyeglasses and hair and prosthetic limbs and tattoos and iPods and lungs are all of the same ilk: things that individually are not persons, but that can be aspects of persons that in turn define those persons — at least on a moment to moment basis.

Did I Say Overlords? I Meant Protectors…

My earliest concept of what a “robot” was came, unsurprisingly, from science fiction. I basically saw robots as “metal people”, and that’s often how they were presented on-screen. It didn’t even occur to me as a child to question whether or not “robots” had consciousness or agency (but then again, I also tended to see pretty much everything as “potentially alive”, so that isn’t too surprising). I also had some robot-themed toys growing up; one of them was an educational machine called Alphie II, and I had a number of robotic Star Wars action figures. My brother also had a really neat little gizmo labeled “Robot Factory” that consisted of one large robot with a built-in mechanism that sent several tiny robots on an endless roller-coaster ride along a track that snaked around its body. So basically, I can’t remember ever not being around what I’d term the “robot phenotype”.

But I didn’t learn about “real robots” until I was quite a bit older, and honestly, I was rather surprised at how “primitive” they seemed, as well as at how they were used. I think the first “real robot” I saw was on a TV show about automobile manufacturing (or something along those lines), and it just looked like a multi-jointed yellow mechanical arm-thing that moved according to the motives of whoever had programmed it to build cars.

So basically, every robot I’ve ever made the acquaintance of in real life has been either an industrial robot, a toy, or an experimental “kit” bot equipped with a few sensors and/or actuators. And even the more impressive robots I’ve heard of (such as the DARPA Grand Challenge cars) haven’t been autonomous in the sense that humans, many animals, and fictional robots (like R2D2) are — at best, they can do one thing quite well, but they aren’t capable of deciding they’d rather do something else, and it seems to me unlikely that they’ve experienced existential despair over this fact.

Clearly, robots are commonplace today — just not autonomous robots. And yet, there seems to be a kind of background assumption that not only would autonomous robots be desirable in some contexts, but autonomous robots would somehow represent a more “advanced” robot in some significant way. But would humans actually want to build truly autonomous machines?

Humans tend strongly to use technology prosthetically — that is, as the collective pool of knowledge about How Stuff Works (and How To Make Stuff Do Other Stuff) grows over time and is communicated more effectively to more and more people, the trend has been toward applications that allow people to assert their ideas, desires, and will over a greater distance, or with greater strength, or with greater precision, than was feasible before the adoption of the application. The trend has not (at least from what I’ve observed) been toward trying to — forgive the terminology — “ensoul” machines, except perhaps in the context of university lab projects, none of which have exactly panned out in that direction so far.

The world is already pretty well populated by autonomous agents (animals), and half the time it seems like humans are more concerned with trying to decrease the autonomy of these agents than with increasing it. Hence, the idea of large groups of humans deciding to create autonomous robots and “release them into the wild” for the sake of allowing new life to flourish seems a mite farfetched.

Plus, there’s the ethical problem with creating an autonomous entity in a lab — as far as I’m concerned, once you’ve established that an entity is autonomous, you have no right to keep it confined (in a lab or otherwise), nor is it acceptable to subject it to non-consensual or coerced experimentation.

This fact alone makes it seem unlikely to me that truly autonomous robots are going to be a major human goal anytime in the foreseeable future — right now, robots outside the movies are pretty much thought of as being “tools” (extensions of human will), and people don’t want their tools to talk back or say “No!”.

Progress, Rights, and Personhood

Part of what is meant by some uses of the word “progress” is a kind of ongoing emancipatory process that involves seeking to recognize more and varied forms of personhood, to develop and provide tools that assist with individual flourishing, and to ensure that new technological developments (or proposed developments) benefit more than a few privileged folks.

So while I certainly enjoy talking and thinking about robots, and while I would be overjoyed to someday wander through bright jungles populated by colorful mechanical fauna who have been set free to flourish as beings in their own right (rather than as means to some “end”), I think it’s important to stay grounded in the present when considering what actions would likely lead to the greatest progress in the sense described above.

“Real” autonomous robots would, after all, be non-tools — and non-tools (people, other autonomous entities, etc.) cannot be used, absorbed, and/or discarded by others in the sense that tools can. One reason I find myself intrigued by “roboethics” discussions these days is actually tied into the very real civil rights struggles faced by already-existing persons. And again with the disclaimer that this is a science fiction scenario, I can’t help but wonder whether humans are at the point of being able to recognize very atypical persons (such as sentient robots would be) as non-tools. My guess is “not quite”, and I see a potential (if not exactly imminent) danger of people creating entities that are autonomous and sentient, but that are not acknowledged as such. It’s not as if there isn’t a precedent for this.

Some of the worst abuses in history have been perpetrated as a result of people trying to use, absorb, and ignore or deny the personhood and autonomy of other people. Ethnic minorities, women, children, disabled persons, and individuals of any configuration in positions of disadvantage for whatever reason have all had to deal with being treated like tools (in the sense of being considered non-autonomous, and only worth what they can “produce”, whether it be slave labor, sons to carry on the family lineage, or in the case of disabled persons, “proof” of full personhood in the first place).

And this isn’t something we’re exactly past as a species yet. Regardless of the general sense I still have that all things in reality have a kind of “character” to them, I’m well aware that some things are tools, and that people are not tools, though tools can be extensions of people. Robots, perhaps, are interesting because they stand in a strange area where they have the potential to be considered either non-autonomous things or people (or both, context permitting!), depending on what direction the research goes in.

And given this, I think that anyone who finds himself or herself obsessing over “robot rights” would do very well to learn a bit more about general civil rights. Not only is a much greater consciousness of civil rights gravely needed in the present, but it is going to be vital to broaden the common concept of what a full person is if anyone really wants to see the kind of wide-ranging prosthetically-enabled vibrant diversity that may at least become physically feasible within the lifetimes of many alive today.

Science Fiction, Speculation, and Living Machines

It wouldn’t be an exaggeration to say that I’m a huge science fiction nut (and that this has been the case for practically as long as I can remember). I grew up being exposed to Star Trek (both the original series and the Next Generation series when that came out), Star Wars (which I became utterly obsessed with at the age of eight), and other miscellaneous media.

Still, I don’t watch a lot of television. I don’t have cable in my apartment, and the only channels we do get here come in fuzzily at best (and I have zero interest in having a zillion channels to flip through — it actually drives me nuts when people do that, so I’m certainly not going to enable it in my apartment!). Much of my science-fictional education (if you can call it that) has been through books. As a kid I started off reading whatever books my father happened to have lying around (a favorite was Roger Zelazny’s Amber series), and I’d have slept in the library if I’d been allowed to.

But I do like a good movie now and then, and I am always pleased to find a fun series to watch episodes of on DVD with my dinner. And since I managed to exhaust the available Joss Whedon catalog last year, recently I went in a slightly different direction and started watching the science fiction series Farscape on DVD.

So far, what I’ve seen of Farscape has been delightful. The first few episodes were a bit rough (in terms of both dialogue and special effects), but the show rapidly picked up momentum and is definitely carving out a special niche in my brain as I move into the latter half of the second season. Nobody could argue that Farscape is exactly hard SF (there’s a ridiculous amount of hand-waving at times with regard to how particular technologies and manifestations of alien biology work), but it doesn’t need to be in order to be very good at being what it is: a fun space-fantasy that is equal parts imaginative romp and comfortable, familiar territory.

There are some aspects of Farscape that remind me of Trek, some that bring Firefly to mind, some that invoke visions of The Fifth Element, and some shades of Stargate SG-1, but the show certainly has plenty of distinguishing elements in its own right.

One of the more intriguing elements of Farscape is the ship the main characters fly around on — a “biomechanoid” creature known as a leviathan. “Living ships” are nothing new as far as science fiction goes, but I haven’t seen very many of them on television outside various anime series, where there often isn’t any clear line between meat-based life and metal-based life at all.

Now, of course the definition of “life” varies a lot depending on who you ask, but in the context of Farscape, the leviathan (named Moya) is sentient, capable of experiencing emotion, capable of reproduction, and able to self-repair to some extent. She can also communicate fairly directly with her Pilot (who is, quite literally, bonded to her through a network of neural and other physical connections), though non-Pilot crew members must communicate with the ship through the Pilot since their connection to her isn’t as direct.

I realize that “living ships” are most certainly confined to the realm of science fiction as far as the world as we know it goes. But as a character in Farscape pointed out early on in the series, humans have long had functional/”contractual” relationships with other animals, such as horses (albeit with some complicating ethical problems; I personally don’t like the way humans often assume that animals are here for our “use”, but at the same time, I do believe humans and nonhuman animals can reach states of mutual understanding and friendship).

The idea of a “living ship” to traverse space with might be fantastic now, but it’s still fascinating to think about in terms of what the various implications might be of this arrangement.

In Farscape, the leviathans are “created beings” — their race was brought into existence by another alien species who intended them to act as “emissaries of peace”. They are not equipped with weapons, they develop symbiotic bonds with their Pilots, and they grow to better accommodate their crew over time. They can feel happy and sad, they can experience loyalty and disappointment, and they possess a survival instinct and a drive to protect their young. They seem to enjoy providing passage to those aboard, but they also have minds of their own — they don’t so much “take orders” as go along with what the crew wants (since it allows the leviathan the opportunity to continually explore), unless the crew’s wants conflict with the leviathan’s own desires, agenda, and sense of self-preservation.

Now in looking at the ethics surrounding leviathan-crew relationships, two models are presented in Farscape. One model sees the leviathan as something that is simultaneously a tool and a friend (i.e., another conscious being to be treasured and related to on his/her own terms). Here, the leviathan is given the opportunity to bond with a Pilot and carry a crew and explore, while fully conscious and autonomous. In this model, the relationship between the leviathan and everyone aboard is basically symbiotic; the leviathan provides life support and transportation for the crew, and their whims and goals provide the leviathan with new experiences and the opportunity to help others and (hopefully) advocate for peace and other positive notions.

The second model, however, is one in which the leviathan is “captured” and rendered either unconscious or semiconscious, and fitted with a “control collar” which allows the crew to direct him/her at their whim. Pilots in this second model are still present, but instead of being allowed to undergo the bonding process (which can take a rather long time, but which ultimately results in a painless and more effective communicative link), they are painfully and forcibly grafted to the living ships — and then subjugated by the crew, which leads to their being threatened with everything up to and including death as penalty for not following orders.

As I’ve watched Farscape, though of course I fully understand that I’m watching fiction, I’ve found myself feeling very sympathetic toward the leviathan. The second model described above pretty much enrages me, and whenever Moya sustains damage on the show, it makes me flinch a little bit. Call it silly if you like, but that’s just the way it is for me.

When I was little, my overall view of the world vaguely resembled panpsychism — that is, I didn’t really distinguish between “people” and “objects” in my environment, and consequently I saw everything as potentially “alive”.

I’ve read some studies that interpret this tendency in autistics as indicating that we view people as essentially inanimate, but I’m really curious as to whether my experience could actually be the more common one. Basically, I didn’t prioritize people over books or trees or cats in the sense I was expected to (I was often lectured for reading — or wandering around looking at things — in group settings rather than “socializing”).

However, this does not mean that I ever saw people as hollow or empty; as far as I was concerned, nothing was hollow or empty; everything, from the smallest piece of broken crayon to the largest lichen-crusted rock, was suffused with a kind of unique character. People just didn’t always stand out as the most interesting things in the immediate environment.

One reason I suspect I’ve always been drawn to science fiction and fantasy to some extent is because these genres often present worlds in which nonhuman forms and intellects are more accepted as a matter of course. When I imagine what it might be like to fly around on a living ship, the thought is strangely comforting.

That aside, I’ve been aware for a long time that while many people think of conscious awareness as one of the most “advanced” attributes an entity might possess, there really isn’t any push to make all the machines humans use conscious. Certainly, there’s plenty of motivation to create more “intelligent” systems (e.g., self-navigating cars), but when it comes to things being designed primarily as tools, I’m not sure most people would welcome self-awareness as an attribute of those tools.

Historically, humans have tended to create things (or enable/nurture their existence) for two major reasons: because we need a new tool to help us accomplish a task (the tool, in that case, is basically a means to an end), or because we want to bring something into existence for its own sake. Of course one could argue that everything humans do is a means to some fundamental end (like “replication” or “happiness”), but this kind of argument seems to me a bit arid and limiting. Regardless of what our dopamine levels are doing, the experience of making or acquiring a tool to accomplish a specific task is a qualitatively different one than the experience of making or acquiring something that is going to be treasured as opposed to merely used.

Now, some people do actually treasure their tools (my dad and I are both extremely averse to throwing things away!), but even those of us who will keep the broken remnants of a favorite piece of hardware usually have a list of items we do consider “disposable” and/or interchangeable with other similar items. These items are things we expect to fulfill a specific purpose reliably, with minimal demands on our time and attention. People don’t necessarily want their obsolete computer hardware to “know” they plan to replace it once it wears out, and they don’t want their dying device batteries to demand Christian burials. In short, many prefer the tools they use to be tools, not friends or pets.

Your mileage may vary, of course. But I wouldn’t call it entirely premature or ridiculous to suggest that people start thinking about what increasing levels of computational complexity in their tools might imply, philosophically speaking. I’m not suggesting that our toasters and calculators are on the verge of “waking up”, merging with Google, and initiating an Appliance Revolution, but rather that it can’t hurt to at least imagine what nonhuman or even atypical-human consciousnesses might look like. At the very least, you might get a good science fiction story out of it!