Appily Ever After?

I was very interested to read this funny take on psychology smartphone apps – or, more accurately, on how NOT to build a psychology app – by Judith Newman in the New York Times. I just blogged about this general topic in my last post, and what struck me most about this article was the notion of time.


Art by Emily Flake (published in the New York Times 4/5/2013)

This article seems to suggest that mental health apps should quickly and effortlessly facilitate our relationships, efficiency, and well-being. As Newman writes in the article:  “All of these apps require thought. Lots and lots of thought. Thinking is what I do all day long. I needed something that would turn my mind off, not on.”

Great point. Maybe we don’t want the app to be our shrink – because when we go to a therapist, we tend to have a set of expectations that involve spending a good deal of time and energy (unless we’re just looking for a medication fix). Apps, by their nature, are fast, easy, and mobile. So, most of us expect that a psychology app will be a shortcut to mental health. We shouldn’t have to spend time learning how to use the app or spend too much time on it – at least not so much that it takes away from “having a life.”

This view tells me that there is a potentially deep disconnect here: between what many of us in the mental health field think of as the promise of mobile health technologies and what everyone else thinks. Many psychologists see a future in which apps and computerized therapeutic tools break down barriers to treatment, which can be too expensive and intensive for many. For example, for the most common class of psychiatric disorder, the anxiety disorders, only about 20% of anxious people receive treatment! So, the psychologists are thinking, jeez, mobile technologies offer so many amazing possibilities for integrating mental health treatment into the daily life of people who are suffering.  Let’s create an app for that!

But we need to think through our approach carefully. If we just put the same old (frankly boring) computerized interventions on smartphones, will that actually help us reach more people? How many will choose to use these tools? Maybe some, but perhaps not many. Perhaps what most of us want from an app is the digital, interactive version of the self-help book – something you can take or leave, pick up and put down after a few minutes and still get something from, and that doesn’t feel like just another source of techno-burden.

So, what is the take-home message for the mental health professionals? Make it fun, make it fast, and make it effective or get back to work on making traditional treatments better.

Gamifying Mental Health or: Mental Health – We Got Game

I just attended the second annual Entertainment Software and Cognitive Neurotherapeutics Society (ESCoNS) conference. Say that five times fast. This conference brought together people in the gaming world with cognitive neuroscientists. I went because I’m developing (and testing) an app that I believe can help people reduce stress, worry, and anxiety in their lives. Beyond exploring more deeply how to make mental health truly fun, I felt I was watching the future of mental health unfold before my eyes.

Gamifying mental health

Here are four ideas I think will change how the field of mental health will look in a decade (or less):

1. Mental health care WILL BE gamified. The mobile revolution and app zeitgeist have changed how we get things done. We want an app for everything because we want our lives mobile and streamlined, and the minute we think we want to do something, we want a device to help us do it. We are also trusting ourselves (and our networks) more and professionals less. This is the self-help movement taken to a new level. If we can seek mental health support on our devices rather than through a professional, more of us will do so. This plays into our growing tendency to feel more comfortable with devices than with others – this may be good or bad, or somewhere in between, but this is how it is. I believe the question is not whether mental health care will be gamified, only how and when.

2. Fun will motivate mental health treatment seeking. Scientists interested in human beings understand how to break something down into its component parts (whether an idea, a behavior, or a biological response) to study it, but scientists are not trained to construct something that is fun and that motivates people to come back again and again. That is art and intuition, combined with a lot of experience and good old-fashioned luck. If we want to reach the greatest number of people, and help them integrate mental health interventions into their lives, we need to make mental health fun.

3. Training your brain… with video games? The idea that you could train your brain with video games is still perceived by many to be in the realm of science fiction. But consider that every experience we have, particularly repeated experiences, changes our brain – why wouldn’t a video game? This reflects the important concept of neural plasticity – the structure and function of the brain are malleable not just in childhood, but throughout the lifespan. In addition to games that can train different abilities (e.g., attention in kids with ADHD), technologies like virtual reality are being used as safe and effective ways to treat everything from addiction to post-traumatic stress disorder.

4. The Emotional Brain is a “buzzing” target for intervention. In the 20th century, psychology was dominated by cognitive theories of how the brain works and what causes mental illness. Emotion was a little blip on the screen, an irrational irritant to the otherwise rational, predictable, and orderly domain of the thinking mind. Now, that irritant is an increasingly important focus of research. For example, not much more than a decade ago, economic decision making was understood as a “rational” process. Now it’s assumed that emotions influence our decisions, for better and for worse, and the task is to figure out how. The effect of emotion is not “irrational.” Rather, it reflects the fundamental integration between our ability to feel and to think. Without one, the other is deeply impoverished. As emotion researchers, my colleagues and I are happy everyone has caught up – it’s about time! Emotions are the engines of our lives – and of psychopathology. No real living happens in an emotional vacuum.

It was clear to me from the conference that there is an emerging field in which the gaps between clinical psychology, cognitive neuroscience, and entertainment are being bridged. This field is fundamentally interested in the emotional and social brain, and “healthy emotional brain architecture” will be the goal of many computerized, gamified interventions. Increasingly, people predict a (near) future in which games will routinely be prescribed in the doctor’s office, and may eventually replace the office visit. If we can change our emotional brains, we can change ourselves. At least, that’s what many are counting on.

 

Rebel Without a Status Update

I am fascinated by the psychology of Facebook status updates. There are many reasons to make a status update. One reason, of course, is obvious – let others know what you’re up to or share something that’s cool. For example, if I did frequent status updates, I might decide to post “Buying a fantastic ½ pound of Australian feta at Bedford Cheese Shop on Irving Place – should I up it to a pound?!” (and seriously, it is incredible). This may be an interesting snapshot of a day in my life, but these types of status updates are exactly the ones that tend to annoy me for some reason. Even the most benign version of this feels like TMI.

Why? Status updates are, for many, an instinctive way to reach out. A recent study even showed that increasing the number of status updates you post each week makes you feel more connected to others and less lonely. Seems like a good thing! Moreover, it’s consistent with what seems to be our new cultural comfort zone – being virtually seen and heard by a loosely connected group of people we know (or sort of know) as our “social network.” This virtual network is the social status quo for many of us, and certainly for many children growing up today.

I believe one consequence of this is that no one wants to be James Dean anymore. Put another way, maintaining privacy and being a strong, silent type, like Dean, are no longer alluring ideas to us. And when I thought of this, I realized why I don’t feel fully comfortable with the status update culture – I am a proponent of the James Dean School of Sharing Personal Information in Public (motto: the less, the better). I like understatement, privacy, the choice to share with a few and retain privacy with most.


It’s no coincidence that as a culture, we don’t fetishize James Dean any more. Many of today’s icons (some of them “anti-icons” because we love to feel superior) are people who humiliate themselves, who will tweet that they’re on the toilet and what they’re doing there, who end up in compromised positions, and happen to have pictures and videos of those positions, which then promptly go viral (funny how that happens). James Dean would have disapproved.

James Dean himself would have been very bad at social media… or perhaps very, very good. Very bad, because he would have had little to say, and would have hated the constant spotlight and the social media culture of ubiquitous commentary and chitchat. On the other hand, he might have been very good at it, because he would have been the Zen master of the status update, expounding with haiku-like pithiness. An imaginary James Dean status update:

James Dean…

Old factory town

Full moon, snow shines on asphalt

#Porsche alive with speed

But seriously, while he probably wouldn’t have written haiku, perhaps he somehow would have figured out how to use sharing to create a sense of privacy, because a sense of mystery would have remained intact.

Yes, the status update is a beautiful thing. We have an efficient and fun tool which allows us to reach out to others, curate our self-image, and think out loud to a community. But I wonder if we’re starting to lose the simple pleasures of privacy, of knowing less and wondering more.

Downton Abbey: Television for the Internet Age?

I love Downton Abbey. It hits a sweet spot of mindless pleasure for me. Yes, it’s really just a British-accented Days of our Lives, but it’s wonderfully acted, soothingly English, and has a few nice, clever twists. In honor of the US premiere of the third season last night, I thought I’d bring my interest in things digital to bear on Downton Abbey. “How?” you might ask. It all starts with Shirley MacLaine.

Shirley MacLaine, who joined the cast for the third season (already aired in the UK but just now airing in the US), was recently interviewed by the New York Times about why she thinks the show has captured the devotion of so many. As most of you probably know, it’s been a huge, surprise international hit. If I have my stats right, it’s one of the biggest British shows ever.

She made a comment that caught my attention. From the interview (verbatim):

Q. What about the show hooked you in?

A. I realized that Julian [Fellowes, the “Downton Abbey” creator and producer] had either purposely or inadvertently stumbled on a formula for quality television in the Internet age. Which means there are, what, 15 or so lives and subplots, with which not too much time is spent so you don’t get bored, but enough time is spent so you are vitally interested.

Photo: Carnival Film

This comment alludes to an idea that we’re all familiar with – because we’re constantly multitasking and skimming huge amounts of information in a superficial way in order to wade through our daily information overload, we have developed a preference for short snippets of entertainment rather than more in-depth (read: intelligent, complex) material. We just don’t have the patience or bandwidth anymore for anything longer or more involved.

I think linking up the popularity of Downton Abbey with this notion is an interesting idea. Of course, I have no basis upon which to say whether Ms. MacLaine is right or wrong, but my instinct is that she is not quite right. Although it’s hard to avoid the conclusion that much of our entertainment has evolved towards less depth and more superficiality over the past decades, this drift towards the superficial precedes the internet age. Soap operas have been popular for a long time. Reality television was a well-entrenched phenomenon before the dominance of mobile devices made multitasking a daily reality. And come on now: look at the TV shows from the 1950s or 1960s – not exactly sophisticated material across the board. How much have we actually drifted towards the superficial? Maybe we’ve just always been here.

So, for me, this explanation doesn’t hit the mark. However, another way to interpret Ms. MacLaine’s comment is that we love having “15 or so” subplots (to avoid getting bored) simply because we enjoy multitasking. It’s not that we CAN’T focus our attention for a long period of time. We just don’t like to. Perhaps we prefer shifting our attention because it feels better/easier/more familiar to divide our attention among several things. Perhaps we just want to have it all.

To illustrate: yesterday, I showed my four-year-old son Kavi a picture (on my iPad) of some of his friends. He liked it a lot, but there was something about it he didn’t totally understand (it was a joke picture). Whatever the case, he thought “it was cool.” He and his dad, Vivek Tiwary, were having a little boys’ time watching Tintin, so I started to walk away with the picture. He completely balked at that, claiming he wanted to look at the picture and watch Tintin at the same time. I asked him how he’d do that, why he would want to do that, and so on. No coherent answers were forthcoming except his claim that “it’s better this way.” And indeed, he proceeded to watch the movie, glance down at the picture on my iPad, watch the movie, glance down… for the next several minutes. He seemed to be enjoying himself. He seemed to feel it was better this way.

For me, the take-home message here was that for my little guy, more was just better. Maybe that’s the secret of Downton Abbey as well: it’s just a whole lot of whatever it is that makes it special.

 

Islands in the Stream: A Meditation on How Time Passes on Facebook

Shortly after the terrible tragedy in Newtown, I received email notifications that my (designated) close friends on Facebook had made status updates. Scrolling through my news feed, I saw my friends express the range of emotions that we all felt – horror, sadness, distress, anger, and confusion. Later that day, I popped onto Facebook again and was jarred and a little upset to read that friends who seemed to have just expressed horror and heartbreak were now posting about everyday, silly, and flippant things.

Now, why should I be jarred or upset? Hours had gone by. After three, or six, or ten hours, why wouldn’t we be in a different emotional state, and why wouldn’t it be ok to post about it? I started to think that it was not my friends’ posts that were at issue here. Rather, it was the nature of how I perceive the passage of time and sequence of events on Facebook. A couple aspects of this came to mind:

Facebook time is asynchronous with real time. Time is easily condensed on Facebook. Events and updates that might be spread out over the course of a day or several days can be read at a glance, and therefore seem to be happening almost simultaneously. So, our perception of time on Facebook is a combination of how frequently our friends post and how frequently we check in. For example, say I check in twice in two days – at 9am on day 1 and at 9pm on day 2. I know a good bit of time has passed (and the amount of time that has passed is clearly indicated next to friends’ updates), but I still read each status update in the context of the previous ones – especially if I pop onto a friend’s Timeline instead of my news feed.

With this type of infrequent checking, friends’ updates about their varying and changing emotions (which might be reasonably spread out over the course of a day or multiple days) appear to be an emotional roller coaster. If someone has several posts in a row about the same thing, even if they are spaced days apart, the person comes across as preoccupied with the topic. Somehow, I form a view of this individual that brings these little snippets together into one big amorphous NOW. If I were checking more frequently, however, perhaps I wouldn’t lump updates together in this way. I’d “feel” the passage of time and – more accurately – see that the ebb and flow of status updates are like islands in the stream of our lives rather than a direct sequence of events.


Related to this first point, it occurred to me that status updates are not meant to be interpreted in the context of preceding status updates. Our brains are pattern recognition machines. So, if Facebook status updates follow one after the other, our brains may perceive a direct sequence of events. But each status update is a snapshot of a moment, a thought, or a feeling. Intuitively, they are supposed to be stand-alone, not readily interpreted in the context of a previous update, even if they occur close together in actual time. Think how different this is from our face-to-face interactions, in which the sequence of events matters. For example, imagine that you’re at work, and your co-worker tells you she is on pins and needles waiting to hear back about a medical test. When you see her a few hours later, she is joking and laughing. You assume she either (a) got some good news from the doctor, or (b) is trying to distract herself from the worry. You don’t think she’s just having a good time, out of context of what you learned about her earlier in the day. But this contextualization is not the way it works on Facebook. Linkages between updates are tenuous, connections malleable. We can lay out our stream of consciousness in a way that requires no consistency among updates. Maybe the temporal and logical requirements of the off-line world are suspended on social networking sites like Facebook. Maybe our brains need to catch up.

In Love with the Written Word: Reading in the Digital Age

I was interested to see this commentary by five college students about reading in the digital age, posted on Zócalo Public Square. One of the things that struck me the most was my own anticipation that I would be out of touch with how college students are engaging with the written word today – and I’m a college professor who should be in touch! But actually, I found that the diversity of their approaches mirrors the diversity I see among my peers.

Several seemed to express a need for speed and fast consumption of many (relatively superficial) sources of information in the attempt to swim rather than sink in the ocean of information that needs sorting through every day. Others seemed to feel burdened by this glut of information and nostalgic for the simple, physically satisfying pleasure of holding and reading a book – a virtual luxury in our fast-paced lives, because it’s hard to multitask with a book. Among all the writers, however, I sensed information fatigue combined with enthusiasm for the written word.

My take-home message is that, whatever the future holds, the digital age has put writing, reading, and text at the center of our lives. I think we are becoming more rather than less in love with reading. The question is, what will we be reading and will it be grammatically correct ;-)?

This is Your Brain on Technology?

There is a lot of polarized dialogue about the role of communication technologies in our lives – particularly mobile devices and social media: Technology is either ruining us or making our lives better than ever before. For the worried crowd, there is the notion that these technologies are doing something to our brain; something not so good – like making us stupid, numbing us, weakening social skills. It recalls the famous anti-drug campaign: This is your brain on drugs. In the original commercial, the slogan is accompanied by a shot of an egg sizzling on a skillet.

So, this is your brain on technology? Is technology frying our brain? Is this a good metaphor?

One fundamental problem with this metaphor is that these technologies are not doing anything to us; our brain is not “on” technology. Rather, these technologies are tools. When we use tools, we change the world and ourselves. So, in this sense, of course our brain is changed by technology. But our brain is also changed when we read a book or bake a pie. We should not accord something like a mobile device a privileged place beyond other tools.  Rather, we should try to remember that the effects of technology are a two-way street: we choose to use tools in a certain way, which in turn influences us.

We would also do well to remember that the brain is an amazing, seemingly alchemical combination of genetic predispositions, experiences, random events, and personal choices. That is, our brains are an almost incomprehensibly complex nature-nurture stew.  This brain of ours is also incredibly resilient and able to recover from massive physical insults. So, using a tool like a mobile device isn’t going to “fry” our brain. Repeated use of any tool will shape our brain, surely, but fry it? No.

So, “this is your brain on technology” doesn’t work for me.

The metaphor I like better is to compare our brains “on technology” to a muscle. This is a multi-faceted metaphor. On one hand, like a muscle, if you don’t use your brain to think and reason and remember, there is the chance that you’ll become less mentally agile and sharp. That is, if you start using technology at the expense of using these complex and well-honed skills, then those skills will wither and weaken. It’s “use it or lose it.”

On the other hand, we use tools all the time to extend our abilities and strength – whether it’s the equipment in a gym that allows us to repeatedly use muscles in order to strengthen them, or a tool that takes our muscle power and amplifies it (think of a lever). Similarly, by helping us do things better, technology may serve to strengthen rather than weaken us.

It is an open question whether one or both of these views are true – and for what people and under what conditions. But I believe that we need to leave behind notions of technology “doing” things to our brains, and instead think about the complex ways in which our brains work with technology – whether that technology is a book or a mobile device.

 

Through a Glass, Darkly; But Then Face to Face: Sensitive Souls and Social Media

There is an idea out there that’s prevalent but has little or no scientific support: that people who use more social media are less sensitive, less empathic, and less emotionally attuned. My students Lee Dunn and Amy Medina and I wanted to put that assumption to the test (and reported the findings at the Society for Psychophysiological Research Annual Conference). We found the opposite: people who prefer to use technology like social media to communicate with others are actually more emotionally sensitive and more empathic. These folks aren’t emotionally stunted or disconnected. If anything, they are more attuned to their emotions and to the emotions of others, and also might be more challenged by these emotions. They are “sensitive souls.”

This makes sense when you start to think about how hard face-to-face interactions can be. When we use social media, we may feel more in control and safe compared to face-to-face conversation. Technology affords a comfortable distance. It’s simply easier to tell someone you’re angry via email or IM, without having to deal with their reactions in person. So, if you’re an emotionally sensitive person, you might be drawn to social media. This is a judgment-free statement. Our findings don’t weigh in on whether this helps or hinders a person’s social and emotional skills. That is the critical next step in our research. Here is what we know so far:

How we put it to the test. While previous studies ask people to report on very basic aspects of their social media use – like how many hours a week they use social media sites – we did something new. We asked people how they prefer to communicate with others (and what they actually did over the past 6 months) when they need to express emotions like anger or excitement, ask for or give social support during emotionally tough times, and exchange information. For each question, answers could vary from 100% using technology (not including the phone) to 100% using face-to-face interactions. Many people showed a strong face-to-face preference, but just as many showed a strong tech preference.

Then, we asked people to tell us about their emotional lives – emotional highs and lows, empathy for others, personality, and satisfaction with the social support they receive from others. Finally, we recorded EEG (aka “brainwaves”) while they viewed emotional pictures. While EEG doesn’t give us the power to directly access people’s consciousness (Oh, Dennis Quaid, you really had us believing that you could EEG your way into our brains in the 1984 movie Dreamscape), EEG can measure the degree to which our brains are sensitive to different types of emotional information – pleasant, disgusting, erotic, dangerous, and cute, cuddly things. We showed participants everything from sex to kittens, and graves to gore.

The power of EEG, portrayed by the movie Dreamscape (1984). Dennis Quaid is probably NOT looking at pictures of kittens.

Findings. Data analyses are incomplete and are not yet published, so I’ll only discuss the broad strokes of our findings. As I stated at the top, those who prefer to communicate via social media and technology versus face-to-face interactions are sensitive souls: they report feeling more negative emotions (like anxiousness and sadness), are less extroverted, and are less satisfied with the social support they receive from others. On the other hand, they also report feeling more empathic towards others (for example, “I get a strong urge to help when I see someone who is upset” or “it upsets me to see someone being treated disrespectfully”).

Complementing this, EEG findings show that those with a social media/tech preference have stronger brain responses to pictures portraying mortality – graves, sick people, dying loved ones. That is, the brains of folks who prefer social media are more sensitive to pictures that are reminders of death and loss.

This is not about social media causing anything! The popular press often describes research about social media in inaccurate ways – saying that social media caused people to be a certain way (e.g., the idea of Facebook depression). This sounds sexy but is just wrong most of the time. Unless you’ve done experiments showing that social media directly change something about people, or you’ve tracked how social media use predicts changes in people over time, you cannot even begin to discuss causality.

So what can we discuss? What does this all mean? It means that our findings are not about causality; they are descriptive. These results help us describe the social-emotional profile of people who prefer and use tech-mediated versus face-to-face social interactions – their personalities, goals, strengths, and vulnerabilities. Ultimately, this can help us understand the growing role of social media in our everyday routines, and why, for some, these tools can feel like lifeboats in the stormy seas of our lives. What remains unclear is whether these lifeboats are going to bring us to shore or whether we will be lost at sea (ok, this metaphor is getting a little much).

Where are we going with this? Importantly, we have no idea what the long-term costs or benefits of social media are for our sensitive souls. That is where I am really going with this research. I believe we need to track how a tech preference influences us from the cradle to the rocking chair: in our digital natives who are using these tools before they are out of diapers; in adults, who almost can’t remember a time when these tools didn’t exist; and in older adults, who may be discovering the immense world that opens up before them when they use technology to communicate with others.

Networked Individualism: Personal Agency Meets the Electronic Leash

I just started reading a book called “Networked: The New Social Operating System” by Lee Rainie and Barry Wellman. Many of you interested in social media have probably come across this book. The authors are leading authorities at the forefront of research that tracks how the internet and information technologies are being integrated into our lives. They do large, survey-based studies and are clearly doing some of the best work of this type. They have significant resources behind them, including the Pew Research Center’s Internet & American Life Project, of which Rainie is the director. So, they are able to do this work extremely well and on a large scale.

Rainie is a journalist with a background in political science, and Wellman is a professor of sociology. So, for me as a psychologist with a clinical and neuroscience background, their methods and perspectives are quite different from mine. This makes reading about their research, and the conclusions they draw from it, very interesting, but I often have lingering questions about what their data mean.

One of the major ideas this book puts forward is that of networked individualism. Barry Wellman’s website was very helpful in teasing this concept apart. According to the notion of networked individualism, there has been a three-fold information technology revolution that has influenced how we function as individuals in society. First was the personal internet, second the growth of mobile access, and third the predominance of computer-mediated social networks. Networked individualism is the outcome. It refers to our growing tendency to operate as individuals in a network rather than as group members. This means that social activities are organized around the individual rather than the family or neighborhood. Each person has enhanced agency because they operate their own social network. Thus, individuals rather than groups are at the hub of social life.

Network vs. Group. What does it mean to function in a network like this, rather than in a group? It means, according to Rainie and Wellman, the following: we are more fragmented, maneuvering easily among networks; person-to-person contact becomes more important than meeting in groups or in a specific location; and we make decisions independently rather than via the group, but draw on our networks to seek relevant information. In essence, we are individuals surfing a vast and complex social web, and we have multiple “neighborhoods” comprised of the people we can text, tweet, email, and tag. These neighborhoods change according to our needs.

Families. Families, according to them, are also functioning more as a network than as a group. We see each other less often than several decades ago, but we are actually in closer communication thanks to mobile communication technologies (i.e., we’re emailing, texting, and calling each other a lot). Some have referred to the constant awareness that we can have of others as an electronic leash (Wellman, on his website, compares this to the ball and chain of the past).

So, reading this, I can’t help but picture busy family members texting and emailing all day, not getting home until late, missing the family dinner, and removing themselves to their respective rooms to get on their devices. I’m being silly here, and I don’t actually think this happens a lot (although I know from observation that some people’s family lives are much like this).

At the same time, the electronic leash seems to me to be a double-edged sword. On one hand, we’re more connected. I like this in many ways. For example, being able to text my husband any little thought that enters my head is awesome (particularly when it’s of the “don’t forget to…” variety). He’s less excited about that aspect of the technology, I imagine. On the other hand, my expanded social network takes a lot of time to keep up with, and I often feel that I have less time for my family and close friends unless I’m very strict and let a lot of messages/texts/tweets just go. I also find sometimes that I get into a mode of texting or emailing things to my close family and friends rather than talking. That’s fine for the sake of efficiency much of the time, but I can’t help feeling that I’m losing out on something more satisfying – what I think of as the alchemy of face-to-face conversations, the unpredictable creativity and clarity that can happen when you just have an old-fashioned conversation.

Costs and Benefits. There is no doubt in my mind that we benefit from the ease of communication and the speed of information access. Also, personally, I love the ability to do more, communicate more, find out more, more, more!!! But the irony is that these tools can easily create just as many demands on our time as they relieve.

Rainie and Wellman seem, from the tenor of their writing, to be really excited about these changes. They seem to be saying (and I should be careful here, because I haven’t read the entire book yet) that these changes are already happening – we’re becoming more disconnected in terms of our membership in groups, communities, and even the family. However, social media technologies are helping us maintain connection in the face of this change, and may even foster more face-to-face time and social support. In a nutshell, we no longer live in villages, so why are we bemoaning the fact that we don’t know our neighbors anymore? Instead, through social media, we are empowered to have extremely large, rich, and diverse social networks that we can draw on to find the social support that we need.

Moreover, according to them, we are shifting to internet-based communities rather than in-person groups. Networked individuals tend to move around fluidly from one network to another rather than having a core community they are anchored in. People with whom you’re networked can change, turn over, and you probably have distinct networks for distinct purposes, rather than a deep connection with a few friends and relatives.  That is, you figure out where you can get what you need among your multiple social networks, and go to them. As a result, there is more uncertainty and less loyalty, but also more freedom and maneuverability. You can choose to have everyone know what you’re doing, or maintain privacy and selectively inform people what you’re doing. This kind of social control is less commonplace in traditional social networks, where you are more “under surveillance.”

Questions. Some of this sounds good to me, some not so good. But there are some questions I’m hoping the book goes on to ask. For example, is this shift towards networked individualism really inevitable? What exactly are the costs (it seems to me at this point that Rainie and Wellman focus more on the potential benefits than the costs)? Are social media just helping us to stay connected, or are they actually a powerful force moving us towards more networked individualism? For whom are these changes good, and for whom are they bad (i.e., are there network mavens and network Elmer Fudds)? What is the difference between the size of a network and its quality? What about the burden placed on us to keep up with large, disparate social networks, which for many people may consist largely of acquaintances? Is there less time and energy left over for “quality” interactions and true intimacy? I hope to report back soon to say that Rainie and Wellman consider these challenging questions.

Cyborgs, Second Brains, and Techno-Lobotomies: Metablog #2

Last week, I had the pleasure of being Freshly Pressed on WordPress.com – that is, I was a featured blog on their home page. As a result, I got more traffic and more interesting comments from people in one day than I have since I began blogging. Thanks, WordPress editors!

I’ve been really excited and inspired by the exchanges I’ve had with others, including the ideas and themes we all started circling around. Most of the dialogue was about a post I made on technology, memory, and creativity. Here, I was interested in the idea that the more we remember the more creative we may be simply because we have a greater amount of “material” to work with. If this is the case, what does it mean that, for many of us, we are using extremely efficient and fast technologies to “outsource” our memory for all sorts of things? – from trivia, schedules, and dates, to important facts and things we want to learn. What does this mean in terms of our potential for creativity and learning, if anything? What are the pros and cons?

I was fascinated by the themes – maybe memes? – that emerged in my dialogue with other bloggers (or blog readers). I want to think through two of them here. I am TOTALLY sure that I wouldn’t have developed and thought through these issues to the same degree – that is, my creativity would have been reduced – without these digital exchanges. Thanks, All.

Picture taken from a blog post by Carolyn Keen on Donna Haraway’s Cyborg Manifesto

Are We Becoming Cyborgs? The consensus is that – according to most definitions – we already are. A cyborg (short for cybernetic organism) is a being that enhances its own abilities via technology. In fiction, cyborgs are portrayed as a synthesis of organic and synthetic parts. But this organic-synthetic integration is not necessary to meet the criteria for a cyborg. Anyone who uses technology to do something we humans already do, but in an enhanced way, is a cyborg. If you can’t function as well once your devices are gone (say, if you leave your smartphone at home), then you’re probably a cyborg.

A lot of people are interested in this concept. On an interesting blog called Cyborgology they write: “Today, the reality is that both the digital and the material constantly augment one another to create a social landscape ripe for new ideas. As Cyborgologists, we consider both the promise and the perils of living in constant contact with technology.”

Yet, on the whole, comments on my post last week were not made in this spirit of excitement and promise – rather, there was concern and worry that by augmenting our abilities via technology, we will become dependent because we will “use it or lose it.” That is, if we stop using our brain to do certain things, these abilities will atrophy (along with our brain?). I think that’s the unspoken (and spoken) hypothesis and feeling.

Indeed, when talking about the possibility of being a cyborg, the word scary was used by several people. I myself, almost automatically, have visions of Borgs and Daleks (look them up, non-sci-fi geeks ;-)) and devices for data streaming implanted in our brains. Those of us partial to future dystopias might be picturing eXistenZ – a Cronenberg film about a world in which we’re all “jacked in” to virtual worlds via our brain stems. The Wikipedia entry describes it best: “organic virtual reality game consoles known as ‘game pods’ have replaced electronic ones. The pods are attached to ‘bio-ports’, outlets inserted at players’ spines, through umbilical cords.” Ok, yuck.

There was an article  last month on memory and the notion of the cyborg (thank you for bringing it to my attention, Wendy Edsall-Kerwin). Here, not only is the notion of technological augmentation brought up, but the notion of the crowdsourced self is discussed. This is, I believe, integral to the notion of being a cyborg in this particular moment in history. There is a great quote in the article, from Joseph Tranquillo, discussing the ways in which social networking sites allow others to contribute to the construction of a sense of self: “This is the crowd-sourced self. As the viewer changes, so does the collective construction of ‘you.’”

This suggests that, not only are we augmenting ourselves all the time via technology, but we are defining ourselves in powerful ways through the massively social nature of online life. This must have costs and benefits that we are beginning to grasp only dimly.

I don’t have a problem with being a cyborg. I love it in many ways (as long as I don’t get a bio-port inserted into my spine). But I also think that whether a technological augmentation is analog or digital, we need to PAY ATTENTION and not just ease into our new social landscape like a warm bath, like technophiles in love with the next cool thing. We need to think about what being a cyborg means, for good and for bad. We need to make sure we are using technology as a tool, and not being a tool of the next gadget and ad campaign.

The Second Brain. This got a lot of play in our dialogue on the blog. It’s the obvious one we think of when we think about memory and technology – we’re using technological devices as a second brain in which to store memories we don’t want to devote our mental resources to.

But this is far from a straightforward idea. For example, how do we sift through what is helpful for us to remember and what is helpful to outsource to storage devices? Is it just the trivia that should be outsourced? Should important things be outsourced if I don’t need to know them often? Say, for example, I’m teaching a class and I find it hard to remember names. To actually remember these names, I have to make a real effort and use mnemonic devices. I’m probably worse at this now than I was 10 years ago because of how often I DON’T remember things in my own head anymore. So, given the effort it will take, and the fact that names can just be kept in a database, should I even BOTHER to remember my students’ names? Is it impersonal not to do so? Although these are relatively simple questions, they raise, for me, ethical issues about what being a professor means, what relating to students means, and how connected I am to them via my memory for something as simple as a name. Even this prosaic example illustrates how memory is far from morally neutral.

Another question raised was whether these changes could affect our evolution. Thomaslongmire.wordpress.com asked:  “If technology is changing the way that we think and store information, what do you think the potential could be? How could our minds and our memories work after the next evolutionary jump?”

I’ll take an idea from andylogan.wordpress.com as a starting point – he alluded to future training in which we learn how to allocate our memory, prioritize maybe. So, perhaps in the future, we’ll just become extremely efficient and focused “rememberers.” Perhaps we will also start to use our memory mainly for those types of things that can’t be easily encoded in digital format – things like emotionally-evocative memories. Facts are easy to outsource to digital devices, but the full, rich human experience is very difficult to encode in anything other than our marvelous human brains. So if we focus on these types of memories, maybe they will become incredibly sophisticated and important to us – even more so than now. Perhaps we’ll make a practice of remembering those special human moments with extreme detail and mindfulness, and we’ll become very, very good at it. Or, on the other hand, perhaps we would hire “Johnny Mnemonics” to do the remembering for us.

But a fundamental question here is whether there is something unique about this particular technological revolution. How is it different, say, from the advent of human writing over 5,000 years ago? The time scale we’re talking about is not even a blink in evolutionary time. Have we even seen the evolutionary implications of the shift from an oral to a written memory culture? I believe there is something unique about the nature of how we interact with technology – it is multimodal, attention-grabbing, and biologically rewarding (yes, it is!) in a way that writing just isn’t. But we have to push ourselves to articulate these differences, and seek to understand them, and not succumb to a doom-and-gloom forecast. A recent series of posts on the dailydoug does a beautiful job of this.

Certainly, many can’t see a downside and instead emphasize the fantastic opportunities that a second brain affords us; or at least make the point, as robertsweetman.wordpress.com does, that “The internet/electronic memory ship is already sailing.”

So, where does that leave us? Ty Maxey wonders if all this is just leading to a techno-lobotomy – provocative term! – but I wonder if instead we have an amazing opportunity to use these technological advances as a testing ground for figuring out, as a society, what we value about the capacities that many of us think make us uniquely human.