The Medium is the Message: On Mindfulness and Digital Mirrors

I recently had the pleasure of doing a talk-back with Congressman Tim Ryan on the role of mindfulness – focusing your awareness on the present moment – in education, as part of the Rubin Museum’s Brainwave Festival in NYC. The film, called “Changing Minds at Concord High School,” followed an entire school as they took part in a mindfulness training program. This school is unique in that it is a transfer school, a last stop for many kids with a history of school failure and discipline problems. The twist here is that the students both filmed the experience and conducted a study – of their classmates! – comparing the effects of mindfulness training with those of a placebo. We also included a science curriculum on the neuroscience of mindfulness – how it can change our brains for the better. I was the lead scientist on this project, so the kids were my “research assistants.” The project was spearheaded and directed by the amazing Susan Finley and filmed by the equally inspiring Peter Barton (with the help of the students). Our outstanding scientific advisors were David Vago and Robert Roeser. There was a lot that was amazing about this project, these kids, and this film. I want to focus on just one aspect, which hinges on the phrase “The medium is the message.”


The medium is the message. This phrase was coined by Marshall McLuhan, who put forward the idea that the “form of a medium embeds itself in the message.” That is, the medium in which we experience something influences how we perceive the take-home message. Using movies as an example, he argued that the way in which this medium presents time has transformed our view of time from something that is linear and sequential into something that reflects patterns of connection across people and places. I am obviously no film theorist, but I apply this notion to the idea that different media provide us with an array of tools that can help us create a narrative of ourselves and the world that is unique to that medium.

Film and self-identity. In the case of our film “Changing Minds at Concord High School,” I believe that one way that the medium was the message for our students was that film is able to portray individual identities as being truly flexible and changeable. I think that the teens at Concord High, many of whom have experienced tremendous challenges, stress, and obstacles in life, didn’t believe as a group that change for them was really possible. But what our program strove to do, using converging media – film, scientific readings, mind/body experiences of mindfulness – was to convince these young adults that they really could change their brains, change counterproductive habits of thinking, and find the tools to focus more and let negative feelings go. As we move on to Phase 2 of the project by refining and developing our program, we are asking the fundamental question: How can we best use these tools to teach teens to view themselves and the world differently, creating a narrative in which personal change is possible?

Our digital mirrors. I think these issues are especially important to consider now, in this era of social media and reality television in which we crave to see ourselves reflected back to ourselves. We can criticize this, and analyze this, but the fact of it borders on the irrefutable. We know that it’s easier than ever before to document our lives via pictures and videos on our mobile devices, and share them with our digital networks. And we love to do so. Social media, through which we share images of ourselves and our lives, are an immeasurably huge and complex array of mirrors into which we can gaze at ourselves. There may be costs and benefits to this, but it simply is. The power of this, however, is that we now have a new set of tools to curate our beliefs about who we are – hopefully for the better. And perhaps we believe this evidence of who we are more strongly because it is concrete, it is documented, it receives “likes” and is seen by others and thus is real. I’m liked, therefore I am.

This digital infrastructure also provides a profound opportunity for those trying to support growth and positive change in youth. If we help youth document the possibility of change – like we did in “Changing Minds at Concord High School” – they may start to believe it applies to their own lives. This is particularly important for those who aren’t used to feeling that the world is full of possibilities. In this way, social networking may be a medium that gives the message that change is possible and that our limitations are as fluid as the flow of information.

Mission Impossible?: Fitting the Techno-Social Landscape of Our Lives into Neat Little Boxes

What can science really tell us about the complex roles of social media, technology, and computer-mediated communication in our social lives? It’s a question I’ve been increasingly asking myself. As a scientist, my job is to deconstruct very complex phenomena into understandable components – to put things in neat, little, over-simplified boxes so that we can actually begin to understand something in systematic, replicable ways. Don’t get me wrong. I love science and think the tools of science are still the best we have available to us. But there are also limitations to these tools.

In particular, I think we haven’t even begun to wrap our heads around how all the technologies we use to augment our social lives work together to create a unique social experience. For example, the social context of texting is very different from that of Facebook, which in turn is very different from the social context of blogging, and so on. Simply studying the number of hours a given person uses social media or some type of communication technology is not going to tell you a lot about that person’s life. A given person may be on Facebook 12 hours a week, avoid texting and talking on the phone, listen to all their music on Spotify, trawl YouTube videos 5 hours a week, video chat 12 times a week – and the list goes on. It seems to me that the experience of all these media, TOGETHER, makes up our full technosocial landscape – the gestalt of our lives.

So how do we start to understand each person’s unique profile of social technology use? One difference that could matter is that some of us are using technologies that facilitate direct social connection and social networking (e.g., Facebook), whereas others are using technologies that are more like digital analogs to the phone (e.g., texting). It probably also matters whether these technologies augment or take the place of face-to-face interactions. There is an interesting post on the dailydoug blog that includes discussion of these kinds of differences.

I’m also starting to think it’s not so much the explicit social interactions we have via technology (e.g., commenting on someone’s status update on Facebook) that matter, but rather the degree to which we use technology to transport ourselves into a connected state of consciousness. I actually think this applies to any technology – we probably all have used books, music, TV, and other things to transport our consciousness and feel more connected to something bigger than ourselves. But in the case of mobile technology and social media, the nature of the game has changed in a fundamental way – communication is completely portable, deeply social, extremely fast, and set up in such a way that we feel “disconnected” if we don’t constantly check our devices.

So, how do we unpack the complex profiles of our technology use and the key role these technologies play in our sense of connection with others? What are the patterns? Are there patterns that are problematic or helpful in terms of making us all happier (and isn’t that the only thing that really matters)? If a pattern is problematic, can we tweak it so that it becomes healthy? Are there optimal patterns for certain types of people? How can we take into account that while two people might both use Facebook 3 hours a day, they might respond to this experience completely differently (e.g., some people feel more depressed after using Facebook because of all the social comparisons that make them feel lacking; many others just feel happy and more connected)? Are there certain combinations of technology use and face-to-face time that allow people to feel connected in a way that enriches without the burden of too many forms of communication to keep up with? I think technology burden is a deepening issue, and that many of us are starting to figure out the costs and benefits of our digitally-connected lives.
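One concrete (if toy) way to chase these questions is to treat each person’s weekly technology use as a numeric profile and look for clusters of similar people. The sketch below does this with a minimal k-means implementation; the names, hours, and category choices are entirely made up, purely to illustrate the approach:

```python
import math
import random

# Hypothetical weekly-use profiles: hours of (Facebook, texting, video chat, YouTube).
# The people and numbers are invented for illustration only.
profiles = {
    "Ana":   [12, 0, 6, 5],
    "Ben":   [11, 1, 5, 6],
    "Carla": [0, 14, 0, 1],
    "Dev":   [1, 12, 1, 0],
}

def dist(p, q):
    """Euclidean distance between two usage profiles."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def kmeans(points, k=2, iters=20, seed=0):
    """A minimal k-means: returns the final cluster centers."""
    rng = random.Random(seed)
    centers = [list(p) for p in rng.sample(points, k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: dist(p, centers[i]))
            clusters[nearest].append(p)
        for i, members in enumerate(clusters):
            if members:  # keep the old center if a cluster empties out
                centers[i] = [sum(col) / len(members) for col in zip(*members)]
    return centers

centers = kmeans(list(profiles.values()))
labels = {name: min(range(len(centers)), key=lambda i: dist(v, centers[i]))
          for name, v in profiles.items()}
print(labels)  # Ana and Ben land in one cluster; Carla and Dev in the other
```

Real research would of course need richer features (session lengths, time of day, subjective responses) and validation of whatever clusters emerge, but even this toy version shows how a “profile” can be made concrete enough to analyze.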

Why do I think this is so hard for Science to examine? Because it is very difficult to scientifically study non-linear phenomena – those processes that are not in the format of A influences B which in turn influences C. Instead, when you have individuals, each with a unique profile of technology use that makes up our social lives, along with all the subjective experiences and feelings that go along with it, you have a really interesting multi-level dynamic system. Sometimes when you deconstruct a system to understand its separate parts, you lose the whole. You know, the old, “the whole is greater than the sum of its parts.”

In answer to my question, I don’t think this is a mission impossible. But I think it’s a mission that is incredibly rich and challenging. I’m up for trying and hope that I and others can find a way to honor these complexities by finding scientifically-valid “boxes” and approaches which are good enough to hold them.

With Great Power Comes Great Responsibility: Are Social Media Anti-Social?

This past Wednesday, I had the pleasure of being a panel member for a debate at the UN on social media. It launched the debate series “Point/Counter-point” organized by the United Nations Academic Impact team. You can see the debate here.

We debated on the theme “social media are anti-social.” I was assigned to the team arguing in support of this point. I was unhappy with being asked to take this side – because I don’t agree with it! – but I was willing to do so with the understanding that I would argue that the very question of whether social media are anti-social is a faulty one. That is, like most technology, social media are neither good nor bad in and of themselves because the impact of social media depends on how they are used. Moreover, from a scientific standpoint, we know almost nothing about whether social media are actually making us more “anti-social” – less socially connected and less socially skilled.

After clearly stating this, however, my strategy was to highlight ways in which social media COULD be antisocial – emphasizing that the research to test these possibilities remains to be done. Perhaps that was one reason why we (my teammate BJ Mendelson and I) lost so spectacularly. At the same time, it was clear that the audience (whose votes determined the winning side) had already made up their minds before the debate even began. This was unsurprising because social media, as this era’s technological bugaboo, are absurdly polarizing. They are either the scapegoat for all that is wrong or the best hope for a utopian future. And of course, the truth is always somewhere in between.

Coincidentally, this very debate had just been played out in relation to an inflammatory Newsweek article last week called “Is the Web Driving Us Mad?” A flurry of responses emerged, including an online Time Healthland article calling into serious question the Newsweek article’s review of evidence that the internet “makes” people crazy. Essentially, the Newsweek article is accused of being sensationalistic rather than doing what responsible journalism is supposed to do: (a) impartially seeking out and weighing the evidence that exists with a careful eye to the quality and direct implications of the science being cited, and (b) avoiding quoting scientific findings out of context.

I believe, however, that there is so much polarized debate because the research we need to weigh in on these issues has not yet been conducted. And that was my main point in the debate. We know almost nothing about the cause-and-effect relationship between social media or the internet and mental health: Are these technologies making us crazy, depressed, anxious, and so on, or are people who are already troubled in offline life troubled no matter what the context? How do we measure anti-social, or crazy, or any other outcome that reflects the well-being of an individual? The plethora of unanswered questions makes for polarizing journalism.

One interesting possibility that the Newsweek article brought up and which I considered in the debate was the idea that social media may influence us in ways that are more powerful than other types of technology because they tap into something that is fundamentally rewarding to humans (and most mammals!): the need to be socially connected with others.

I made the point in the debate that, “Science is finding that social media are so rewarding, so motivating, that they essentially ‘hijack’ our brain’s reward centers – the same brain areas that underlie drug addiction – so that you see what all of us can attest to: people have difficulty disengaging from social media. They feel the need to constantly check their device for the next text, tweet, status update, or email. They feel obsessed. The documented existence of Facebook addiction attests to this. How many of us walk down the street, or eat dinner in a restaurant, with our devices clutched in our hand or lying on the table right next to us like a security blanket? I know I do more often than I’d like.”

Indeed, we don’t walk down the street reading a book or watching TV. These technologies can be consuming, but the nature of social media – portable, brief, deeply social – creates a completely different set of temptations and rewards. Textbook theories of behavioral learning and reinforcement tell us that the way rewards are integrated into social media is a recipe for keeping us roped in. For example, if your goal is, say, to make a rat in a cage press a bar as frequently as possible, you should do the following: every once in a while, in a completely unpredictable way, give a reward when the bar is pressed. In contrast, if you give a reward every time the rat presses the bar, it will become sated and press less. And if you reward in a predictable way, the rat will press the bar just enough to get the reward and no more – because it knows how many times it needs to press before the reward comes.

Now think about how we use our devices. We check them frequently (analogous to pressing the bar) because we’re never sure when an important message, a really good piece of news, or a fascinating factoid will pop up (analogous to the unpredictable reward). So, we find ourselves with device in hand, “pressing the bar” over and over again, all day long. The whole economy of social media (i.e., the way the creators of these platforms make their money) depends hugely on this very fact.
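For readers who like to see the mechanics, here is a toy simulation of the three reinforcement schedules described above: continuous, fixed-ratio, and variable-ratio. The press counts and the 1-in-5 reward rate are arbitrary choices for illustration; the point is simply that only the variable-ratio schedule makes the gap between rewards unpredictable:

```python
import random
import statistics

def reward_gaps(schedule, presses=10000, n=5, seed=42):
    """Simulate bar presses; return the number of presses between successive rewards."""
    rng = random.Random(seed)
    gaps, since_last = [], 0
    for _ in range(presses):
        since_last += 1
        if schedule == "continuous":       # reward every press
            rewarded = True
        elif schedule == "fixed":          # reward exactly every n-th press
            rewarded = since_last == n
        else:                              # "variable": reward with probability 1/n
            rewarded = rng.random() < 1 / n
        if rewarded:
            gaps.append(since_last)
            since_last = 0
    return gaps

for name in ("continuous", "fixed", "variable"):
    g = reward_gaps(name)
    print(f"{name:10s} mean gap {statistics.mean(g):.2f}, "
          f"spread {statistics.pstdev(g):.2f}")
```

The continuous and fixed schedules produce a spread of exactly zero – the reward is perfectly predictable – while the variable schedule’s spread stays large no matter how long the simulation runs, which is the “never sure when” quality the phone-checking analogy turns on.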

Now I have to stop and give a MAJOR caveat: This idea may be compelling – it sounds like it could be right – but, from my reading of the literature, there is very little direct evidence that it is the case. All we know is that, neurologically, aspects of social media and internet use are rewarding, calming, and pleasurable. That is a far cry from “hijacking our brain,” a phrase I used in the debate for the sake of argument and hyperbole. At the same time, a growing number of people think this is a viable hypothesis, and one that we must put to the test.

By the end of the debate, I think we were all in agreement that when forced to pick a side, we could argue it. But really, we all felt the same thing: Whether social media are anti-social simply depends. It depends on who is using them, how they are using them, and why they are using them. And we just don’t have the scientific knowledge yet to understand these whos, hows, and whys.

I concluded my opening statement in the debate by saying, “Until we as a society spend the time, energy and resources to scientifically test how we are changed [by social media], we should proceed with caution and with the possibility in mind that social media could make us more anti-social.”

But BJ Mendelson may have summed it up best when he made a good old-fashioned fanboy reference: with great power comes great responsibility. We need to take the responsibility to look at, question, and try to understand the role of social media in our lives and in society.

Cyborgs, Second Brains, and Techno-Lobotomies: Metablog #2

Last week, I had the pleasure of being Freshly Pressed on WordPress.com – that is, I was a featured blog on their home page. As a result, I got more traffic and more interesting comments from people in one day than I have since I began blogging. Thanks, WordPress editors!

I’ve been really excited and inspired by the exchanges I’ve had with others, including the ideas and themes we all started circling around. Most of the dialogue was about a post I made on technology, memory, and creativity. There, I was interested in the idea that the more we remember, the more creative we may be, simply because we have a greater amount of “material” to work with. If this is the case, what does it mean that, for many of us, we are using extremely efficient and fast technologies to “outsource” our memory for all sorts of things – from trivia, schedules, and dates to important facts and things we want to learn? What does this mean in terms of our potential for creativity and learning, if anything? What are the pros and cons?

I was fascinated by the themes – maybe memes? – that emerged in my dialogue with other bloggers (or blog readers). I want to think through two of them here. I am TOTALLY sure that I wouldn’t have developed and thought through these issues to the same degree – that is, my creativity would have been reduced – without these digital exchanges. Thanks, All.

Picture taken from a blog post by Carolyn Keen on Donna Haraway’s Cyborg Manifesto

Are We Becoming Cyborgs? The consensus is that – according to most definitions – we already are. A cyborg (short for cybernetic organism) is a being that enhances its own abilities via technology. In fiction, cyborgs are portrayed as a synthesis of organic and synthetic parts. But this organic-synthetic integration is not necessary to meet the criteria for a cyborg. Anyone who uses technology to do something we humans already do, but in an enhanced way, is a cyborg. If you can’t function as well once your devices are gone (like when you leave your smartphone at home), then you’re probably a cyborg.

A lot of people are interested in this concept. On an interesting blog called Cyborgology they write: “Today, the reality is that both the digital and the material constantly augment one another to create a social landscape ripe for new ideas. As Cyborgologists, we consider both the promise and the perils of living in constant contact with technology.”

Yet, on the whole, comments on my post last week were not made in this spirit of excitement and promise – rather, there was concern and worry that by augmenting our abilities via technology, we will become dependent because we will “use it or lose it.” That is, if we stop using our brain to do certain things, these abilities will atrophy (along with our brain?). I think that’s the unspoken (and spoken) hypothesis and feeling.

Indeed, when talking about the possibility of being a cyborg, the word scary was used by several people. I myself, almost automatically, have visions of the Borg and Daleks (look them up, non-sci-fi geeks ;-)) and devices for data streaming implanted in our brains. Those of us partial to future dystopias might be picturing eXistenZ – a Cronenberg film about a world in which we’re all “jacked in” to virtual worlds via our brain stems. The Wikipedia entry describes it best: “organic virtual reality game consoles known as ‘game pods’ have replaced electronic ones. The pods are attached to ‘bio-ports’, outlets inserted at players’ spines, through umbilical cords.” Ok, yuck.

There was an article last month on memory and the notion of the cyborg (thank you for bringing it to my attention, Wendy Edsall-Kerwin). Here, not only is the notion of technological augmentation brought up, but the notion of the crowdsourced self is discussed. This is, I believe, integral to the notion of being a cyborg at this particular moment in history. There is a great quote in the article from Joseph Tranquillo, discussing the ways in which social networking sites allow others to contribute to the construction of a sense of self: “This is the crowd-sourced self. As the viewer changes, so does the collective construction of ‘you.’”

This suggests that, not only are we augmenting ourselves all the time via technology, but we are defining ourselves in powerful ways through the massively social nature of online life. This must have costs and benefits that we are beginning to grasp only dimly.

I don’t have a problem with being a cyborg. I love it in many ways (as long as I don’t get a bio-port inserted into my spine). But I also think that whether a technological augmentation is analog or digital, we need to PAY ATTENTION and not just ease into our new social landscape like a warm bath, like technophiles in love with the next cool thing. We need to think about what being a cyborg means, for good and for bad. We need to make sure we are using technology as a tool, and not being a tool of the next gadget and ad campaign.

The Second Brain. This got a lot of play in our dialogue on the blog. This is the obvious one we think about when we think about memory and technology – we’re using technological devices as a second brain in which to store memories to which we don’t want to devote our mental resources.

But this is far from a straightforward idea. For example, how do we sift through what is helpful for us to remember and what is helpful to outsource to storage devices? Is it just the trivia that should be outsourced? Should important things be outsourced if I don’t need to know them often? Say, for example, I’m teaching a class and I find it hard to remember names. To actually remember these names, I have to make a real effort and use mnemonic devices. I’m probably worse at this now than I was 10 years ago because of the increased frequency with which I DON’T remember things in my brain now. So, given the effort it will take, and the fact that names can just be kept in a database, should I even BOTHER to remember my students’ names? Is it impersonal not to do so? Although these are relatively simple questions, they raise, for me, ethical issues about what being a professor means, what relating to students means, and how connected I am to them via my memory for something as simple as a name. Even this prosaic example illustrates how memory is far from morally neutral.

Another question raised was whether these changes could affect our evolution. Thomaslongmire.wordpress.com asked: “If technology is changing the way that we think and store information, what do you think the potential could be? How could our minds and our memories work after the next evolutionary jump?”

I’ll take an idea from andylogan.wordpress.com as a starting point – he alluded to future training in which we learn how to allocate our memory, prioritize maybe. So, perhaps in the future, we’ll just become extremely efficient and focused “rememberers.” Perhaps we will also start to use our memory mainly for those types of things that can’t be easily encoded in digital format – things like emotionally-evocative memories. Facts are easy to outsource to digital devices, but the full, rich human experience is very difficult to encode in anything other than our marvelous human brains. So if we focus on these types of memories, maybe they will become incredibly sophisticated and important to us – even more so than now. Perhaps we’ll make a practice of remembering those special human moments with extreme detail and mindfulness, and we’ll become very, very good at it. Or, on the other hand, perhaps we would hire “Johnny Mnemonics” to do the remembering for us.

But a fundamental question here is whether there is something unique about this particular technological revolution. How is it different, say, from the advent of human writing over 5,000 years ago? The time-scale we’re talking about is not even a blink in evolutionary time. Have we even seen the evolutionary implications of the shift from an oral to a written memory culture? I believe there is something unique about the nature of how we interact with technology – it is multi-modal, attention-grabbing, and biologically rewarding (yes, it is!) in a way that writing just isn’t. But we have to push ourselves to articulate these differences, and seek to understand them, and not succumb to a doom-and-gloom forecast. A recent series of posts on the dailydoug does a beautiful job of this.

Certainly, many can’t see a downside and instead emphasize the fantastic opportunities that a second brain affords us; or at least make the point, as robertsweetman.wordpress.com does, that “The internet/electronic memory ship is already sailing.”

So, where does that leave us? Ty Maxey wonders if all this is just leading to a techno-lobotomy – provocative term! – but I wonder if instead we have an amazing opportunity to take these technological advances as a testing ground for figuring out, as a society, what we value about the capacities that many of us think make us uniquely human.

Pattern Recognition: How Technology Might Make Us Smarter

There is a lot of talk about how technology might be making us stupid. The examples are legion, and the possibilities endless: we can’t spell anymore; we can’t remember anything anymore because we have a big, giant, virtual brain called the internet; we have flea-like attention spans; and so on, and so on.

To over-generalize like this is certainly giving technology a bum rap. And of course, many argue the opposite – that using different technologies improves key abilities like working memory and eye-hand coordination. I think there is always the risk of losing skills (aka becoming more stupid) if we use shortcuts all the time and look at things superficially rather than using our brains to understand something at a deeper level. But there are many opportunities to gain new abilities via technology.

One ability that I think might be enhanced by the use of internet-based platforms, like social media, web browsers, and online shopping, is pattern recognition. From the point of view of psychology, pattern recognition refers to perceiving that a set of separate items make up a greater whole – such as faces, objects, words, melodies, etc. This process often happens automatically and spontaneously, and seems to be an innate ability of most animals. Certainly, the tendency to see patterns is fundamentally human – even patterns that don’t exist, such as the Man in the Moon.
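Pattern recognition in this psychological sense has a simple computational analog: matching a noisy input against stored templates and picking the closest fit. The sketch below is a toy illustration of that idea – the 5×5 “pixel letters” are invented, and this is not a model of human perception:

```python
# Toy pattern recognition: match a noisy 5x5 "image" to the closest stored template.
TEMPLATES = {
    "X": ["X...X", ".X.X.", "..X..", ".X.X.", "X...X"],
    "O": [".XXX.", "X...X", "X...X", "X...X", ".XXX."],
}

def flatten(grid):
    """Join a list of rows into one pixel string."""
    return "".join(grid)

def recognize(grid):
    """Return the label of the template whose pixels differ least from the input."""
    pixels = flatten(grid)
    def mismatches(label):
        return sum(a != b for a, b in zip(pixels, flatten(TEMPLATES[label])))
    return min(TEMPLATES, key=mismatches)

noisy_x = ["X...X", ".X.X.", ".XX..", ".X.X.", "X...X"]  # one corrupted pixel
print(recognize(noisy_x))  # → X
```

The point of the toy: even with a corrupted pixel, the whole is recovered from the parts, which is exactly the “perceiving that separate items make up a greater whole” described above.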

How would using the internet help strengthen our pattern recognition abilities? To use the internet, we have to become skilled at skimming through large quantities of information rapidly, instantly judging whether we’ve found the information, website, or person that we’re looking for. Also, we have to rapidly shift from site to site. To process all that information slowly and serially would keep us busy all day. We have to put it together, see the patterns, and glean the information that we need. Children are frighteningly good at this. They have no difficulty sorting through complex arrays of information and graphics. It feels like they read the patterns of computer interfaces like native speakers. It’s not for nothing that we call children growing up today digital natives.

One of my favorite books of the last decade, Pattern Recognition, by the great technovisionary William Gibson, plays with the idea of what pattern recognition means to us today. Set in the present (rather than some future dystopia, which is more usual for him), the novel follows Cayce (pronounced “case,” not “cay-see”), who has an extreme psychological sensitivity to corporate logos and what amounts to an allergic reaction to successful advertising. So, companies hire her to judge the effectiveness of their proposed corporate logos and advertising strategies. Her ability is to effortlessly identify the je ne sais quoi – that special pattern – that makes a logo powerful and effective. I think that Gibson is thinking about our era as one in which highly skilled pattern recognition defines what we do and who we are becoming.

So, the question arises: Does that mean I want to sit my 3-year-old in front of a device for hours a day to help him build these abilities? No. But perhaps focusing on the skills he can build will help me think through how to structure his use of things like the iPad more effectively – such as what apps to choose for him, how to dovetail what he’s learning on the device with what he’s doing in the world (e.g., building blocks all the time, learning about letters and numbers), and how to help him see the patterns in what he’s doing.

Of course it is way too simplistic to demonize any technology by saying it will make us stupid. It’s all about the costs and benefits of how we use the technology. That’s why the research community needs to step up to the plate and try to understand how all these aspects of our children’s technological lives are changing them (or not) – what technology offers us, and what we in turn bring to the table in that equation. We know shockingly little. As parents, we can either cut our children off from technology altogether, or try to use our best judgment and make our children’s interactions with technology useful and powerful. As adults, we can do the same – clearly, we need to think carefully about how we want to integrate these devices into our lives.

Now, sit down and look through your Twitter feed or Facebook news feed, and see all the information you have to sort through. Tons of it! Reams – just in a given day… And feel how your pattern recognition abilities are growing!


Top 7 Ways Blogging Changes My Consciousness: Meta-Blog 1

As a new blogger and as a research psychologist, I’ve been very interested in how blogging has actually changed the way I think about things, how I feel, and the choices I make. So, I decided to start tracking my experience as a user of this particular type of social media by blogging about blogging – or meta-blogging. I’m my own little case study. Here’s my Top 7:

1. I’ve been shower blogging. That is, I rehearse blogs in the shower. When I have what I think is a good idea, I stand there and practice (out loud usually) how I would blog about it. Now, one issue with this is that I don’t have pen and paper in there for obvious reasons, so I forget half of it. Eighty percent of it, really. Even when it sounds SO brilliant. Then there’s the issue of shower logic. It’s like when you dream something and it seems so perfectly logical and genius in the dream, but then you wake up and realize it was gobbledegook. Shower blogging is kind of like this for me. And there is risk attached, too: if you get really carried away, you might forget to wash some parts of your body, so that you find after a few days that your right elbow or whatever is completely filthy.

2. I have a busier mind. Shower blogging is a symptom of this. Essentially, I find myself spending much more of my mental time zooming from one thought to another, time having an internal conversation with myself, and time skimming various streams and feeds (and here I mean Facebook and Twitter – funny how these words evoke nourishment and natural, bucolic settings… maybe a picnic by a stream?). See, this is what I’m talking about. My mind zig-zags with all its loose associations. And I cultivate that to a degree, because that’s how good ideas emerge. I think this is fine and fun in many ways, but I’m doing it A LOT more than usual, and it tires me out a bit. And I worry that I’m less present for my kids and husband and friends.

3. I’m thinking more about being mindful. An interesting side benefit of having a busier mind is that I have a greater desire now to become a more mindful person – having more stillness in my life, and spending more time in the moment. I’ve started to make meditation a deeper habit in my life again, and I’m trying very hard to keep off all devices when I’m with my kids. I don’t want to be that mom who can only give 41.5% of her attention to her kids while she multi-tasks five other things. Don’t get me wrong, moms have to multi-task – Jeez, do we ever. But my goal is that when I’m with my children and spending time, they really feel SEEN by me, really engaged with and listened to and – hopefully – understood.

4. I keep better track of interesting ideas that I otherwise would have lost. I really like this part of it. Just think how many ideas we let go because we’re in the middle of something, or walking around, or in the middle of a conversation – and we just forget them. I try harder to hold onto some of these BECAUSE I think they might make an interesting topic for blogging. I’ll see if this yields anything, but already, I feel my intellectual life is enriched. As a scientist, I do this for my science ideas, but let other stuff go. I think this could be a mistake, and perhaps the ideas in one domain (e.g., science) will be enriched and in turn enrich my blogging ideas.

5. I write with an imaginary audience in mind. I can almost see their faces, lit by the glow of their computer screens or devices. They are avidly soaking up my every word. Right… So, essentially, I am becoming more self-centered. Is this any different from writing a letter? Maybe there is more pressure when the imaginary audience is a group or crowd? I think at this point in history, as a society, we have a deep desire to be seen, to have our 15 minutes OR MORE, to be the next viral video or whatever, to be famous. Is blogging a way to satisfy this urge to some degree?

6. I feel cleverer. Emphasis on the “feel.” It’s pretty clear that I’m not actually cleverer, although the process of putting ideas down on paper makes me feel like there is more going on up there in the old brain. I do a lot of scientific writing, and strangely enough, that does not make me feel particularly clever. Perhaps because it’s just what I do? Perhaps because with blogging, I’m using a part of my brain that has been rusty. Whatever the case, this feeling of being clever is very rewarding, and I suspect it is part of my motivation to blog.

7. I feel more connected. I really do. And this is an interesting psychological phenomenon, because at this point in my blogging career, the nature of this connection is very tenuous. It’s literally in my head – an imagined web of connection, of shared ideas, of simpatico. I think for bloggers who have built a large community, this feeling is much more real. But one has to wonder where this is all going. Online connections (that stay online) can be very emotionally satisfying, but they are more superficial and are not what current psychology tells us is a “true” connection. They are quite a bit easier than other types of connection (i.e., face-to-face, long-term relationships and friendships), so some worry that we are withdrawing into these easier online relationships at the expense of our “real” relationships. I really don’t know if that’s the case. I don’t see it in my own life (although my husband claims I drift onto Twitter in the middle of a conversation. Oops). But this is something I’ll be watching!