There is a lot of polarized dialogue about the role of communication technologies in our lives – particularly mobile devices and social media: Technology is either ruining us or making our lives better than ever before. For the worried crowd, there is the notion that these technologies are doing something to our brain; something not so good – like making us stupid, numbing us, weakening social skills. It recalls the famous anti-drug campaign: This is your brain on drugs. In the original commercial, the slogan is accompanied by a shot of an egg sizzling on a skillet.
So, this is your brain on technology? Is technology frying our brain? Is this a good metaphor?
One fundamental problem with this metaphor is that these technologies are not doing anything to us; our brain is not “on” technology. Rather, these technologies are tools. When we use tools, we change the world and ourselves. So, in this sense, of course our brain is changed by technology. But our brain is also changed when we read a book or bake a pie. We should not accord something like a mobile device a privileged place beyond other tools. Rather, we should try to remember that the effects of technology are a two-way street: we choose to use tools in a certain way, which in turn influences us.
We would also do well to remember that the brain is an amazing, seemingly alchemical combination of genetic predispositions, experiences, random events, and personal choices. That is, our brains are an almost incomprehensibly complex nature-nurture stew. This brain of ours is also incredibly resilient and able to recover from massive physical insults. So, using a tool like a mobile device isn’t going to “fry” our brain. Repeated use of any tool will shape our brain, surely, but fry it? No.
So, “this is your brain on technology” doesn’t work for me.
The metaphor I like better is to compare our brains “on technology” to a muscle. This is a multi-faceted metaphor. On one hand, like a muscle, if you don’t use your brain to think and reason and remember, there is the chance that you’ll become less mentally agile and sharp. That is, if you start using technology at the expense of using these complex and well-honed skills, then those skills will wither and weaken. It’s “use it or lose it.”
On the other hand, we use tools all the time to extend our abilities and strength – whether it’s the equipment in a gym that allows us to repeatedly use muscles in order to strengthen them, or a tool that takes our muscle power and amplifies it (think of a lever). Similarly, by helping us do things better, technology may serve to strengthen rather than weaken us.
It is an open question whether one or both of these views are true – and for what people and under what conditions. But I believe that we need to leave behind notions of technology “doing” things to our brains, and instead think about the complex ways in which our brains work with technology – whether that technology is a book or a mobile device.
There’s a prevalent idea out there that has little or no scientific support: that people who use more social media are less sensitive, less empathic, and less emotionally attuned. My students Lee Dunn, Amy Medina and I wanted to put that assumption to the test (and reported these findings at the Society for Psychophysiological Research Annual Conference). We found the opposite: people who prefer to use technology like social media to communicate with others are actually more emotionally sensitive and more empathic. These folks aren’t emotionally stunted or disconnected. If anything, they are more attuned to their emotions and to the emotions of others, and also might be more challenged by these emotions. They are “sensitive souls.”
This makes sense when you start to think about how hard face-to-face interactions can be. When we use social media, we may feel more in control and safe than we do face-to-face. Technology affords a comfortable distance. It’s simply easier to tell someone you’re angry via email or IM, without having to deal with their reactions in person. So, if you’re an emotionally sensitive person, you might be drawn to social media. This is a judgment-free statement. Our findings don’t weigh in on whether this helps or hinders a person’s social and emotional skills. That is the critical next step in our research. Here is what we know so far:
How we put it to the test. While previous studies asked people to report on very basic aspects of their social media use – like how many hours a week they use social media sites – we did something new. We asked people how they prefer to communicate with others (and what they actually did over the past 6 months) when they need to express emotions like anger or excitement, ask for or give social support during emotionally tough times, and exchange information. For each question, answers could vary from 100% using technology (not including the phone) to 100% using face-to-face interactions. Many people showed a strong face-to-face preference, but just as many showed a strong tech preference.
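To make this kind of scoring concrete, here is a minimal sketch in Python. The scenario names and numbers are entirely hypothetical illustrations of the general approach (percent-technology answers averaged into one preference score), not our actual instrument or data:

```python
# Hypothetical sketch of a communication-preference score.
# Each scenario is answered as the percent of that communication done
# via technology (0 = entirely face-to-face, 100 = entirely via tech).
responses = {
    "express_anger": 80,
    "share_excitement": 70,
    "seek_support": 60,
    "give_support": 55,
    "exchange_information": 90,
}

def tech_preference(responses):
    """Overall preference: mean percent-tech across all scenarios."""
    return sum(responses.values()) / len(responses)

score = tech_preference(responses)
label = "tech-leaning" if score > 50 else "face-to-face-leaning"
print(score, label)  # 71.0 tech-leaning
```

A respondent like this hypothetical one would land well on the "tech preference" end of the continuum; someone answering mostly below 50 would land on the face-to-face end.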
Then, we asked people to tell us about their emotional lives – emotional highs and lows, empathy for others, personality, and satisfaction with the social support they receive from others. Finally, we recorded EEG (aka “brainwaves”) while they viewed emotional pictures. While EEG doesn’t give us the power to directly access people’s consciousness (Oh, Dennis Quaid, you really had us believing that you could EEG your way into our brains in the 1984 movie Dreamscape), EEG can measure the degree to which our brains are sensitive to different types of emotional information – pleasant, disgusting, erotic, dangerous, and cute, cuddly things. We showed participants everything from sex to kittens, and graves to gore.
Findings. Data analyses are incomplete and are not yet published, so I’ll only discuss the broad strokes of our findings. As I stated at the top, those who prefer to communicate via social media and technology versus face-to-face interactions are sensitive souls: they report feeling more negative emotions (like anxiousness and sadness), are less extroverted, and are less satisfied with the social support they receive from others. On the other hand, they also report feeling more empathic towards others (for example, “I get a strong urge to help when I see someone who is upset” or “it upsets me to see someone being treated disrespectfully”).
Complementing this, EEG findings show that those with a social media/tech preference have stronger brain responses to pictures portraying mortality – graves, sick people, dying loved ones. That is, the brains of folks who prefer social media are more sensitive to pictures that are reminders of death and loss.
This is not about social media causing anything! The popular press often describes research about social media in inaccurate ways – saying that social media caused people to be a certain way (e.g., the idea of Facebook depression). This sounds sexy but is just wrong most of the time. Unless you’ve done experiments that show social media directly change something about people, or you’ve tracked how social media predicts changes in people over time, you cannot even begin to discuss causality.
So what can we discuss? What does this all mean? What it means is that our findings are not about causality; they are descriptive. These results help us describe the social-emotional profile of people who prefer and use tech-mediated versus face-to-face social interactions – their personalities, goals, strengths, and vulnerabilities. Ultimately, this can help us understand the growing role of social media in our everyday routines, and why, for some, these tools can feel like lifeboats in the stormy seas of our lives. What remains unclear is whether these lifeboats are going to bring us to shore or whether we will be lost at sea (ok, this metaphor is getting a little much).
Where are we going with this? Importantly, we have no idea what the long-term costs or benefits of social media are for our sensitive souls. That is where I am really going with this research. I believe we need to track how a tech preference influences us from the cradle to the rocking chair: in our digital natives who are using these tools before they are out of diapers; in adults, who almost can’t remember a time when these tools didn’t exist; and in older adults, who may be discovering the immense world that opens up before them when they use technology to communicate with others.
What can science really tell us about the complex roles of social media, technology, and computer-mediated communication in our social lives? It’s a question I’ve been increasingly asking myself. As a scientist, my job is to deconstruct very complex phenomena into understandable components – to put things in neat, little, over-simplified boxes so that we can actually begin to understand them in systematic, replicable ways. Don’t get me wrong. I love science and think the tools of science are still the best we have available to us. But there are also limitations to these tools.
In particular, I think we haven’t even begun to wrap our heads around how all the technologies we use to augment our social lives work together to create a unique social experience. For example, the social context of texting is very different from that of Facebook, which is very different from the social context of blogging, etc. Simply studying the number of hours a given person uses social media or some type of communication technology is not going to tell you a lot about that person’s life. A given person may be on Facebook 12 hours a week, avoid texting and talking on the phone, listen to all their music on Spotify, trawl YouTube videos 5 hours a week, video chat 12 times a week, and the list goes on. It seems to me that the experience of all these media, TOGETHER, makes up our full techno-social landscape; the gestalt of our lives.
So how do we start to understand each person’s unique profile of social technology use? One difference that could matter is that some of us are using technologies that facilitate direct social connection and social networking (e.g., Facebook), whereas others are using technologies that are more like digital analogs of the phone (e.g., texting). It probably also matters whether these technologies augment or take the place of face-to-face interactions. There is an interesting post on the dailydoug blog that includes discussion of these kinds of differences.
I’m also starting to think it’s not so much the explicit social interactions we have via technology (e.g., commenting on someone’s status update on Facebook) but rather, it’s the degree to which we use technology to transport ourselves into a connected state of consciousness. I actually think this applies to any technology – we probably all have used books, music, TV and other things to transport our consciousness and feel more connected to something bigger than ourselves. But in the case of mobile technology and social media, the nature of the game has changed in a fundamental way – communication is completely portable, deeply social, extremely fast, and set up in such a way that we feel “disconnected” if we don’t constantly check our devices.
So, how do we unpack the complex profiles of our technology use and the key role these technologies play in our sense of connection with others? What are the patterns? Are there patterns that are problematic or helpful in terms of making us all happier (and isn’t that the only thing that really matters?)? If a pattern is problematic, can we tweak it so that it becomes healthy? Are there optimal patterns for certain types of people? How can we take into account that while two people might both use Facebook 3 hours a day, they might respond to this experience completely differently (e.g., some people feel more depressed after using Facebook because of all the social comparisons that make us feel lacking; many others just feel happy and more connected)? Are there certain combinations of technology use and face-to-face time that allow people to feel connected in a way that enriches without the burden of too many forms of communication to keep up with? I think technology burden is a deepening issue, and that many of us are starting to figure out the costs and benefits of our digitally-connected lives.
Why do I think this is so hard for science to examine? Because it is very difficult to scientifically study non-linear phenomena – those processes that are not in the format of A influences B, which in turn influences C. Instead, when you have individuals, each with a unique profile of technology use that makes up their social lives, along with all the subjective experiences and feelings that go along with it, you have a really interesting multi-level dynamic system. Sometimes when you deconstruct a system to understand its separate parts, you lose the whole. You know, the old “the whole is greater than the sum of its parts.”
In answer to my question, I don’t think this is a mission impossible. But I think it’s a mission that is incredibly rich and challenging. I’m up for trying and hope that I and others can find a way to honor these complexities by finding scientifically-valid “boxes” and approaches which are good enough to hold them.
I just started reading a book called “Networked: The New Social Operating System” by Lee Rainie and Barry Wellman. Many of you interested in social media have probably come across this book. The authors are leading authorities on the forefront of research that tracks how the internet and information technologies are being integrated into our lives. They do large, survey-based studies and are clearly doing some of the best work of this type. They have significant resources behind them, including the Pew Research Center’s Internet & American Life Project, of which Rainie is the director. So, they are able to do this work extremely well and on a large scale.
Rainie is a journalist with a background in political science, and Wellman is a professor of sociology. So, for me as a psychologist with a clinical and neuroscience background, their methods and perspectives are quite different from mine. This makes reading about their research, and the conclusions they draw from it, very interesting, but I often have lingering questions about what their data mean.
One of the major ideas this book puts forward is that of networked individualism. Barry Wellman’s website was very helpful in teasing this concept apart. According to the notion of networked individualism, there has been a three-fold information technology revolution that has influenced how we function as individuals in society. First, was the personal internet, second the growth of mobile access, and third the predominance of computer-mediated social networks. Networked individualism is the outcome. It refers to our growing tendency to operate as individuals in a network rather than as group members. This means that social activities are organized around the individual rather than the family or neighborhood. Each person has enhanced agency because they operate their own social network. Thus, individuals rather than groups are at the hub of social life.
Network vs. Group. What does it mean to function in a network like this, rather than in a group? It means, according to Rainie and Wellman, the following: we are more fragmented, maneuvering easily among networks; person-to-person contact becomes more important than meeting in groups or in a specific location; and we make decisions independently rather than via the group, but draw on our networks to seek relevant information. In essence, we are individuals surfing a vast and complex social web, and we have multiple “neighborhoods” comprised of the people we can text, tweet, email, and tag. These neighborhoods change according to our needs.
Families. Families, according to them, are also functioning more as a network than a group. We see each other less often than several decades ago, but actually are in closer communication due to mobile communication technologies (i.e., we’re emailing, texting, and calling each other a lot). Some have referred to the constant awareness that we can have of others as an electronic leash (Wellman, on his website, compares this to the ball and chain of the past).
So, reading this, I can’t help but picture busy family members texting and emailing all day, not getting home until late, missing the family dinner, and removing themselves to their respective rooms to get on their devices. I’m being silly here, and I don’t actually think this happens a lot (although I know from observation that some people’s family lives are much like this).
At the same time, the notion of the electronic leash is one that seems to me to be a double-edged sword. On one hand, we’re more connected. I like this in many ways. For example, being able to text my husband any little thought that enters my head is awesome (particularly when it’s of the “don’t forget to….” variety). He’s less excited about that aspect of the technology, I imagine. On the other hand, my expanded social network takes a lot of time to keep up with, and I feel, often, that I have less time for my family and close friends unless I’m very strict and let a lot of messages/texts/tweets just go. I also find sometimes that I get in a mode of texting or emailing things to my close family and friends rather than talking. That’s fine for the sake of efficiency much of the time, but I can’t help but feel that I’m losing out on something more satisfying and on what I think of as the alchemy of face-to-face conversations – the unpredictable creativity and clarity that can happen when you just have an old-fashioned conversation.
Costs and Benefits. There is no doubt in my mind that we benefit from the ease of communication and the speed of information access. Also, personally, I love the ability to do more, communicate more, find out more, more, more!!! But the irony is that these tools can easily create just as many demands on our time as they relieve.
Rainie and Wellman seem, from the tenor of their writing, to be really excited about these changes. They seem to be saying (and I should be careful here, because I haven’t read the entire book yet) that these changes are already happening – we’re becoming more disconnected in terms of our membership in groups, communities, and even the family. However, social media technologies are helping us maintain connection in the face of this change, and may even foster more face-to-face time and social support. In a nutshell, we no longer live in villages, so why are we bemoaning the fact that we don’t know our neighbors anymore? Instead, through social media, we are empowered to have extremely large, rich, and diverse social networks that we can draw on to find the social support that we need.
Moreover, according to them, we are shifting to internet-based communities rather than in-person groups. Networked individuals tend to move around fluidly from one network to another rather than having a core community they are anchored in. People with whom you’re networked can change, turn over, and you probably have distinct networks for distinct purposes, rather than a deep connection with a few friends and relatives. That is, you figure out where you can get what you need among your multiple social networks, and go to them. As a result, there is more uncertainty and less loyalty, but also more freedom and maneuverability. You can choose to have everyone know what you’re doing, or maintain privacy and selectively inform people what you’re doing. This kind of social control is less commonplace in traditional social networks, where you are more “under surveillance.”
Questions. Some of this sounds good to me, some not so good. But there are some questions I hope the book goes on to ask. For example, is this shift towards networked individualism really inevitable? What exactly are the costs (it seems to me at this point that Rainie and Wellman focus more on the potential benefits than the costs)? Are social media just helping us to stay connected, or are they actually a powerful force moving us towards more networked individualism? For whom are these changes good, and for whom are they bad (i.e., are there network mavens and network Elmer Fudds)? What is the difference between the size of a network and the quality of that network? What about the burden placed on us to keep up with large, disparate social networks, which for many people may be largely comprised of acquaintances? Is there less time and energy left over for “quality” interactions and true intimacy? I hope to report back soon to say that Rainie and Wellman consider these challenging questions.
We debated on the theme “social media are anti-social.” I was assigned to the team arguing in support of this point. I was unhappy with being asked to take this side – because I don’t agree with it! – but I was willing to do so with the understanding that I would argue that the very question of whether social media are anti-social is a faulty one. That is, like most technology, social media are neither good nor bad in and of themselves because the impact of social media depends on how they are used. Moreover, from a scientific standpoint, we know almost nothing about whether social media are actually making us more “anti-social” – less socially connected and less socially skilled.
After clearly stating this, however, my strategy was to highlight ways in which social media COULD be antisocial – emphasizing that the research to test these possibilities remains to be done. Perhaps that was one reason why we (my teammate BJ Mendelson and I) lost so spectacularly. At the same time, it was clear that the audience (whose votes determined the winning side) had already made up their minds before the debate even began. This was unsurprising because social media, as this era’s technological bugaboo, are absurdly polarizing. They’re either the scapegoat for all that is wrong or the best hope for a utopian future. And of course, the truth is always somewhere in between.
Coincidentally, this very debate had just been played out in relation to an inflammatory Newsweek article last week called “Is the Web Driving Us Mad?” A flurry of responses emerged, including an online Time Healthland article calling into serious question the Newsweek article’s review of evidence that the internet “makes” people crazy. Essentially, the Newsweek article is accused of being sensationalistic rather than doing what responsible journalism is supposed to do: (a) impartially seeking out and weighing the evidence that exists with a careful eye to the quality and direct implications of the science being cited, and (b) avoiding quoting scientific findings out of context.
I believe, however, that there is so much polarized debate because the research we need to weigh in on these issues has not yet been conducted. And that was my main point in the debate. We know almost nothing about the cause and effect relationship between social media or the internet and mental health: Are these technologies making us crazy, depressed, anxious, and so on, or are people who are already troubled in offline life troubled no matter what the context? How do we measure anti-social, or crazy, or any other outcome that reflects the well-being of an individual? The plethora of unanswered questions makes for polarizing journalism.
One interesting possibility that the Newsweek article brought up and which I considered in the debate was the idea that social media may influence us in ways that are more powerful than other types of technology because they tap into something that is fundamentally rewarding to humans (and most mammals!): the need to be socially connected with others.
I made the point in the debate that, “Science is finding that social media are so rewarding, so motivating, that they essentially ‘hijack’ our brain’s reward centers – the same brain areas that underlie drug addiction – so that you see what all of us can attest to: people have difficulty disengaging from social media. They feel the need to constantly check their device for the next text, tweet, status update, or email. They feel obsessed. The documented existence of Facebook addiction attests to this. How many of us walk down the street, or eat dinner in a restaurant, with our devices clutched in our hand or lying on the table right next to us like a security blanket? I know I do more often than I’d like.”
Indeed, we don’t walk down the street reading a book or watching TV. These technologies can be consuming, but the nature of social media – portable, brief, deeply social – creates a completely different set of temptations and rewards. Textbook theories of behavioral learning and reinforcement tell us that the way rewards are built into social media is a recipe for keeping us roped in. For example, if your goal is to make a rat in a cage press a bar as frequently as possible, you should do the following: every once in a while, in a completely unpredictable way, give a reward when the bar is pressed. In contrast, if you give a reward every time the rat presses the bar, it will become sated and press less. And if you reward in a predictable way, the rat will press the bar just enough to get the reward and no more – because it knows how many presses it takes before the reward comes.
Now think about how we use our devices. We check our devices frequently (analogous to pressing the bar) because we’re never sure when an important message, really good piece of news or fascinating factoid will pop up (analogous to the unpredictable reward). So, we find ourselves with device in hand “pressing the bar” over and over again, all day long. The whole economy of social media (i.e., the way the creators of these platforms make their money) is hugely dependent on this very fact.
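The contrast between these reward schedules can be sketched in a few lines of code. This is a toy illustration, not a model of actual user behavior: it implements the fixed-ratio and variable-ratio schedules described above and shows that, with the same average payout, the fixed schedule produces perfectly predictable gaps between rewards while the variable schedule produces highly variable – unpredictable – gaps, the structure that keeps the rat (or the user) pressing:

```python
import random
import statistics

def fixed_ratio(presses, n=5):
    # Reward exactly every nth press: fully predictable.
    return [(i + 1) % n == 0 for i in range(presses)]

def variable_ratio(presses, mean_n=5, seed=0):
    # Reward on average every nth press, at unpredictable moments.
    rng = random.Random(seed)
    return [rng.random() < 1.0 / mean_n for _ in range(presses)]

def gaps_between_rewards(rewards):
    # Count how many presses it took to earn each reward.
    gaps, count = [], 0
    for rewarded in rewards:
        count += 1
        if rewarded:
            gaps.append(count)
            count = 0
    return gaps

fr_gaps = gaps_between_rewards(fixed_ratio(10_000))
vr_gaps = gaps_between_rewards(variable_ratio(10_000))

# Same average payout, very different predictability:
print(statistics.mean(fr_gaps), statistics.pstdev(fr_gaps))  # gaps always exactly 5, stdev 0
print(statistics.mean(vr_gaps), statistics.pstdev(vr_gaps))  # mean near 5, large stdev
```

Checking a device is like pressing the bar under the variable-ratio schedule: because the next reward could arrive on any press, there is never a “safe” moment to stop.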
Now I have to stop and give a MAJOR caveat: This idea may be compelling, and it sounds like it could be right, but, from my reading of the literature, there is very little direct evidence that this is the case. All we know is that, neurologically, aspects of social media and internet use are rewarding, calming, and pleasurable. That is a far cry from “hijacking our brain,” a phrase I used in the debate for the sake of argument and hyperbole. At the same time, a growing number of people think this is a viable hypothesis, and one that we must put to the test.
By the end of the debate, I think we were all in agreement that when forced to pick a side, we could argue it. But really, we all felt the same thing: Whether social media are anti-social simply depends. It depends on who is using them, how they are using them, and why they are using them. And we just don’t have the scientific knowledge yet to understand these whos, hows, and whys.
I concluded my opening statement in the debate by saying, “Until we as a society spend the time, energy and resources to scientifically test how we are changed [by social media], we should proceed with caution and with the possibility in mind that social media could make us more anti-social.”
But BJ Mendelson may have summed it up best when he made a good old-fashioned fanboy reference: with great power comes great responsibility. We need to take the responsibility to look at, question, and try to understand the role of social media in our lives and in society.
Last week, I had the pleasure of being Freshly Pressed on WordPress.com – that is, I was a featured blog on their home page. As a result, I got more traffic and more interesting comments from people in one day than I have since I began blogging. Thanks, WordPress editors!
I’ve been really excited and inspired by the exchanges I’ve had with others, including the ideas and themes we all started circling around. Most of the dialogue was about a post I made on technology, memory, and creativity. Here, I was interested in the idea that the more we remember, the more creative we may be, simply because we have a greater amount of “material” to work with. If this is the case, what does it mean that many of us are using extremely efficient and fast technologies to “outsource” our memory for all sorts of things – from trivia, schedules, and dates to important facts and things we want to learn? What does this mean in terms of our potential for creativity and learning, if anything? What are the pros and cons?
I was fascinated by the themes – maybe memes? – that emerged in my dialogue with other bloggers (or blog readers). I want to think through two of them here. I am TOTALLY sure that I wouldn’t have developed and thought through these issues to the same degree – that is, my creativity would have been reduced – without these digital exchanges. Thanks, All.
Are We Becoming Cyborgs? The consensus is that – according to most definitions – we already are. A cyborg (short for cybernetic organism) is a being that enhances its own abilities via technology. In fiction, cyborgs are portrayed as a synthesis of organic and synthetic parts. But this organic-synthetic integration is not necessary to meet the criteria for a cyborg. Anyone who uses technology to do something we humans already do, but in an enhanced way, is a cyborg. If you can’t function as well once your devices are gone (like if you leave your smartphone at home), then you’re probably a cyborg.
A lot of people are interested in this concept. On an interesting blog called Cyborgology they write: “Today, the reality is that both the digital and the material constantly augment one another to create a social landscape ripe for new ideas. As Cyborgologists, we consider both the promise and the perils of living in constant contact with technology.”
Yet, on the whole, comments on my post last week were not made in this spirit of excitement and promise – rather, there was concern and worry that by augmenting our abilities via technology, we will become dependent because we will “use it or lose it.” That is, if we stop using our brain to do certain things, these abilities will atrophy (along with our brain?). I think that’s the unspoken (and spoken) hypothesis and feeling.
Indeed, when talking about the possibility of being a cyborg, the word scary was used by several people. I myself, almost automatically, have visions of the Borg and Daleks (look them up, non-sci-fi geeks ;-)) and devices for data streaming implanted in our brains. Those of us partial to future dystopias might be picturing eXistenZ – a Cronenberg film about a world in which we’re all “jacked in” to virtual worlds via our brain stem. The Wikipedia entry describes it best: “organic virtual reality game consoles known as ‘game pods’ have replaced electronic ones. The pods are attached to ‘bio-ports’, outlets inserted at players’ spines, through umbilical cords.” Ok, yuck.
There was an article last month on memory and the notion of the cyborg (thank you for bringing it to my attention, Wendy Edsall-Kerwin). Here, not only is the notion of technological augmentation brought up, but the notion of the crowdsourced self is discussed. This is, I believe, integral to the notion of being a cyborg in this particular moment in history. There is a great quote in the article, from Joseph Tranquillo, discussing the ways in which social networking sites allow others to contribute to the construction of a sense of self: “This is the crowd-sourced self. As the viewer changes, so does the collective construction of ‘you.’”
This suggests that, not only are we augmenting ourselves all the time via technology, but we are defining ourselves in powerful ways through the massively social nature of online life. This must have costs and benefits that we are beginning to grasp only dimly.
I don’t have a problem with being a cyborg. I love it in many ways (as long as I don’t get a bio-port inserted into my spine). But I also think that whether a technological augmentation is analog or digital, we need to PAY ATTENTION and not just ease into our new social landscape like a warm bath, like technophiles in love with the next cool thing. We need to think about what being a cyborg means, for good and for bad. We need to make sure we are using technology as a tool, and not being a tool of the next gadget and ad campaign.
The Second Brain. This got a lot of play in our dialogue on the blog. This is the obvious one we think about when we think about memory and technology – we’re using technological devices as a second brain in which to store memories to which we don’t want to devote our mental resources.
But this is far from a straightforward idea. For example, how do we sift through what is helpful for us to remember and what is helpful for us to outsource to storage devices? Is it just the trivia that should be outsourced? Should important things be outsourced if I don’t need to know them often? Say, for example, I’m teaching a class and I find it hard to remember names. To actually remember these names, I have to make a real effort and use mnemonic devices. I’m probably worse at this now than I was 10 years ago, simply because I now rely on my own brain to remember things far less often. So, given the effort it will take, and the fact that names can just be kept in a database, should I even BOTHER to remember my students’ names? Is it impersonal not to do so? Although these are relatively simple questions, they raise, for me, ethical issues about what being a professor means, what relating to students means, and how connected I am to them via my memory for something as simple as a name. Even this prosaic example illustrates how memory is far from morally neutral.
Another question raised was whether these changes could affect our evolution. Thomaslongmire.wordpress.com asked: “If technology is changing the way that we think and store information, what do you think the potential could be? How could our minds and our memories work after the next evolutionary jump?”
I’ll take an idea from andylogan.wordpress.com as a starting point – he alluded to future training in which we learn how to allocate our memory, prioritize maybe. So, perhaps in the future, we’ll just become extremely efficient and focused “rememberers.” Perhaps we will also start to use our memory mainly for those types of things that can’t be easily encoded in digital format – things like emotionally-evocative memories. Facts are easy to outsource to digital devices, but the full, rich human experience is very difficult to encode in anything other than our marvelous human brains. So if we focus on these types of memories, maybe they will become incredibly sophisticated and important to us – even more so than now. Perhaps we’ll make a practice of remembering those special human moments with extreme detail and mindfulness, and we’ll become very, very good at it. Or, on the other hand, perhaps we would hire “Johnny Mnemonics” to do the remembering for us.
But a fundamental question here is whether there is something unique about this particular technological revolution. How is it different, say, from the advent of human writing over 5,000 years ago? The time-scale we’re talking about is not even a blink in evolutionary time. Have we even seen the evolutionary implications of the shift from an oral to a written memory culture? I believe there is something unique about the nature of how we interact with technology – it is multi-modal, attention-grabbing, and biologically rewarding (yes, it is!) in a way that writing just isn’t. But we have to push ourselves to articulate these differences, and seek to understand them, and not succumb to a doom-and-gloom forecast. A recent series of posts on the dailydoug does a beautiful job of this.
Certainly, many can’t see a downside and instead emphasize the fantastic opportunities that a second brain affords us; or at least make the point, as robertsweetman.wordpress.com does, that “The internet/electronic memory ship is already sailing.”
So, where does that leave us? Ty Maxey wonders if all this is just leading to a technolobotomy – provocative term! – but I wonder if instead we have an amazing opportunity to take these technological advances as a testing ground for us to figure out as a society what we value about those capacities that, for many of us, are what we think make us uniquely human.
Many of us know the feeling of being addicted to our devices. Walking down the streets of Manhattan, I’ve done head counts of the percentage of passers-by who are either clutching (like a security blanket) or using their handheld devices – the number ranges between 20% and 75%. And I usually have to count myself!
But recently, researchers have come out with a Facebook Addiction Scale, suggesting that it might not only be an addiction to our devices, but an addiction to what social media do for us and make us feel – an addiction to being connected.
A simple and elegant definition of addiction is “The continued use of a mood altering substance or behavior despite adverse consequences.” Does this apply to Facebook use? Here are some of the things to look out for:
1. You’re preoccupied with Facebook when you’re not online – This has to do with spending “a lot” of time thinking about Facebook or making plans to use Facebook. I’m not sure what counts as a lot – and I don’t think the measure specifies – but this is about the feeling that you are putting excessive mental energy into Facebook.
2. You need more Facebook time to get the same pleasure from it – This is called tolerance in the addiction literature. It includes getting “sucked into” and spending more time on Facebook than intended: like when you’ve logged on to Facebook and then all of a sudden two hours have gone by; or when you feel the compulsion to check Facebook every two minutes. And key to this is that you often have the urge to use Facebook, and find that you have to use Facebook more and more to get the same pleasure from it. Has anyone ever felt a Facebook high?
3. You use Facebook to feel better or forget about your problems – Like others might drink a glass of wine, pop a pill, and so on. Personally, I don’t use Facebook in this way, but I imagine it’s like my feeling when I log on to Amazon. I’m not much of a shopper, but being able to get exactly what you want immediately – whether it’s socks or a power wheel for your kid (yes, a big green, awesome power wheel car; all terrain!) – is extremely soothing to me. A study showed that when people use Facebook, physiological signs of stress are reduced. This isn’t necessarily a bad thing – it might even be good – but it could be part and parcel of the addictive process.
4. You have tried to reduce your Facebook use without success – Here, the questionnaire is sussing out whether you’ve identified it as a problem, and have tried to cut down on your use of Facebook, but have fallen off the wagon. This is the “uh-oh!” moment.
5. You experience withdrawal feelings when you don’t use Facebook – This is one that might make more sense for younger people, because one of the key questionnaire items for this issue is “Become restless or troubled if you have been prohibited from using Facebook?” I presume they mean prohibited by parents, but perhaps loved ones could be doing the same (see #6 below). These feelings of dis-ease are a sign that dependence is present.
6. You find that your use of Facebook has had a negative impact on your life – This final dimension is important for putting the label of addiction on Facebook use – it’s getting in the way of having a healthy life. This includes using Facebook so much that your job, studies, or relationships are suffering. It also includes Facebook taking the place of other important things, such as hobbies, leisure activities, and exercise. Have you ever ignored your partner, family members, or friends because of Facebook?
I bet many of us have shown at least one of these warning signs at some point. Should we be worried? Probably not, unless Facebook use is getting in the way of being a functional person. On the other hand, having even one warning sign should perhaps give us pause.
Some preliminary studies suggest that Facebook addiction occurs more frequently in people who have other addictive problems (no surprise there), as well as among both younger and older users. People who worry or are socially anxious may also be at greater risk, perhaps because they find Facebook to be an easier way to connect with others. Procrastinators beware – Facebook addiction may also be just another way to avoid work.
So, consider this a public service message. If you feel these warning signs apply to you, it might be time to give it a rest. Or just switch to Pinterest.
I was having an online dialogue with my friend Mac Antigua about how being an active social media and technology user can change how we relate to the world, and can make us feel that we are always on stage. He directed me to an interesting post about digital classicism.
The whole exchange made me think a lot about how the line between our offline “real life” and our lives online is becoming more blurred. Is there even a need to make this distinction? Isn’t the way we conduct ourselves online just an extension of who we are offline? The answer to this is complex, but it is, I think, nicely summed up by t-shirts that my husband Vivek and I saw all over Bangkok when we visited in 2003 – “Same, same, but different.” At the time, we were pretty puzzled by the phrase but found ourselves constantly quoting it. Later, we found out it’s a common Thai-English expression meaning just what it sounds like.
I feel like life online is just like this – same, same, but different. How we interact, how we create identity, how we feel special and understood online is the same, same but different from our offline life. Here are three examples of this:
1. What counts as clever. In the offline world, being clever usually involves being quick-witted: having the fast comeback, thinking on your feet, and so on. But online, you have oodles of time to compose, rewrite, think about, and edit every comment you make. Self-presentation becomes a long-term process rather than a series of quick, face-to-face exchanges that “disappear” as soon as they have happened. These disappearing impressions used to be the basis of our views about each other. Perhaps no more. That’s not to say that many of us don’t dash off the spontaneous tweet or post. It’s just that when we’re trying to be clever, we can take our time about it.
This is nice in some ways, because it has an equalizing effect and gives those of us who are shy or just not speedy thinkers time to express what we mean. This feels like a healthy slowing down. On the other hand, for young people growing up today, does this pose less of a challenge to their conversational skills? And conversational skills are definitely learned and need to be practiced. Are kids going to be less able to carry on conversations that occur in real time than their counterparts a decade ago?
At the same time, does the knowledge that everything you post will be documented (forever) create a whole new set of pressures? These pressures are making some young people “drop out” of digital communities like Facebook: Just too much work and scrutiny. It’s nerve-wracking, trying to be clever.
2. It’s OK to brag. I’m actually not sure that it is OK to brag in online communities, but I see a lot more of it online than offline – even though I live in what is perhaps the bragging capital of the world, New York City. For example, when I first started tweeting, I was surprised that people were spending so much time retweeting posts that others made about them, or tooting their own horn about something or other. In the offline world, if someone started saying things like – “Oh, so-and-so just mentioned what an awesome researcher I am!” – multiple times a day, I would think they were disturbingly self-involved and egocentric.
This seems to be an important difference between online and offline, because one of the purposes of the digital social network is to get yourself and your work “out there.” So, perhaps this is exactly what people should be doing. Does this mean that social mores about bragging may be changing? The interesting thing to watch will be whether these tendencies trickle down into our offline lives.
3. Being cool. I’m no expert on cool, but it seems to me that how people are cool online is quite different than the traditional ways of being cool. Online, cool seems to be defined by the number of friends/followers/connections you have, as well as your sheer presence in terms of posts. It’s about how interesting a conduit of information and cutting edge ideas you are. Cool also is something you have time to work at since very little is spontaneous (see #1 above).
In contrast, few are being the strong, silent, aloof type, full of self-confidence and self-control (think James Dean). Instead, everyone seems to be shouting from the rooftops (or whatever the digital analogy would be) what they think and feel and see. It’s a very “look at me” world on-line, not a subtle world of understatement and innuendo. This is a world in which people live out loud, the louder the better.
Online heroes seem to act the same way as us regular folk in this regard – and maybe even worse because of what can be at times their oblivious self-importance. I once followed an actor on Twitter for all of 10 minutes before unfollowing him because the first tweet of his that I read was about the enormous bowel movement he just had. Seriously.
Of course, there is a lot of variability in how people behave online, but based on my observations, this non-James Dean way of being seems to be the norm. One reason for this shift in cool may be that online, tech-savvy geeks rule the world, so the definition of cool has altered to fit their goals and ways of being. Another may simply be a function of the technology. You can’t be strong and silent online because you would never post anything – and you therefore wouldn’t “exist.” One must be active and one must be taking a chance by putting oneself out there.
This breaking down of cool, in this sense, seems cool to me – when it’s not annoyingly self-involved. And honestly, it is NEVER cool to tweet about your poo.
I’ve started a research project on the impact of social media on our social and emotional lives. When I first began, I carefully considered Sherry Turkle’s work. For the past 15 years, she has written passionately about our evolving relationship with technology. Most recently (see her TED talk), she argues that the way we are using technology, in particular social media, has created “disturbing new habits” that have the potential to make us feel more alone rather than more connected. In other words, we are getting used to being alone, together – being with each other, but elsewhere at the same time. If you’ve ever sat at a table or in a room where everyone was busy on their devices rather than talking with each other, you know what she’s getting at.
I think Sherry Turkle has a lot of important things to say. But she is a divisive character. She does not mince words about what she thinks the implications of our technology habits are in terms of our psychological well-being – more alienation, more aloneness, loss of a capacity for solitude, and stunted development of some of the most basic of social skills, like having a conversation. What is easy to forget, however, is that she also argues that these are habits that we can all change – if we choose to take a look at how our devices not only change what we do but change who we are.
It’s also important to remember that her research is entirely qualitative and anecdotal. Lab-based and quantitative research remains to be done to test her hypotheses. Below, I list a few ideas that she highlights, along with my ideas about how her hypotheses could actually be tested by empirical, lab-based research. For the record, these issues are not exactly what I am studying now, but stay tuned for blog posts that give you my results hot off the data presses.
1. Social media is a flight from conversation. This is the notion that the more we text, post, and email, the less we actually take time to talk with people. An important issue here is that having a conversation is a skill – one that we learn through practice. So, where does that leave the kids today, who are trying to gain these skills? Are they going to be a bunch of Neanderthals communicating in non-grammatical text-ese? Probably not – that’s the future-dystopia vision – but how will the Millennials learn to communicate?
One way to test this is to actually track teens over time, during periods that are critical for building conversational skills (early adolescence maybe). Then, analyze how differences in the frequency and types of social media use correspond over time with conversational skills and abilities (measured via existing IQ tests that tap verbal comprehension and production or measured via some newly developed measure). The longitudinal component is very important here because if you are looking at social media use and conversation skills at the same time, you can’t draw causal conclusions (i.e., it could just be that those with fewer conversational skills prefer the ease of social media). In contrast, by looking at how social media use predicts a trajectory of conversational development over time, you have firmer ground to stand upon if you conclude that social media use is causing conversational deficits. If supported, such findings lead to a lot of other important questions – like what do we do about it?
2. We are drawn to social media because we can have the illusion of companionship without the demands of friendship. This is tricky to study empirically because there are several very subjective components to it. One is that we need to measure people’s goals accurately – e.g., that people are using social media to gain a sense of companionship. This is self-report based, and there are issues like presentation biases (people might not want to admit why they use social media) that could make such things difficult to measure accurately. Secondly, how do you get at how people feel about the demands of friendship? Will research participants report – “Oh, yes, it’s just too hard dealing with my brother’s emotional demands over the phone all the time. Much easier to text.”? Well, maybe some of us would articulate this, but many others might not even be aware that this is what they are doing.
So, in addition to asking people to report on their goals and motivations for social media use, we need to get at implicit processes that they may not be fully conscious of. In the psychology literature, there are tasks such as the Implicit Association Test (IAT). The IAT requires users to make a series of rapid judgments, which researchers believe might reflect attitudes that people are unwilling to reveal publicly. For example, in gender bias research, the IAT has been used to show that most people associate women more strongly with family and men more strongly with careers. Could the IAT be used to examine attitudes towards social media and friendship?
3. We no longer want to give our full attention to anything, and our devices are the way to escape the “boring bits.” This is also tricky. A lot of recent research has examined multi-tasking in terms of whether it compromises your performance on the tasks you’re trying to do at the same time. The answer is: It does. But, in our hearts, we all knew that, didn’t we?
The issue here, though, is somewhat different than multitasking. It’s about the motivation to multitask. The idea is that we multitask to escape boredom, keep our minds busy and moving at all times. Maybe some of us do it because of low boredom threshold, feeling uncomfortable with our thoughts, or having so much to do that any time we feel there is an “empty” moment, we try to fill it. There are lots of possibilities. But how do we study this? One way might be to actually put people in a boring situation (some staged boring lecture), with their devices, and see when and if they use them. If they do, ask them about the goals they were trying to meet (I had to answer that one email that was in the back of my mind; I was bored, and wanted to see what was on my twitter feed). Once we have systematic responses to a real-life scenario from multiple people, we can start to seek out trends in the data.
But, this isn’t so satisfying. So, what if we add some biological measures to get at how using the device changes how we actually feel? Now we’re getting somewhere, because this reveals what using devices “buys us” and why we feel almost addicted to our devices at times. For example, one study showed that using Facebook decreases your physiological signs of stress – it calms you down. But in contrast, as I mentioned in a previous blog post, a study published in the January edition of Evolution and Human Behavior found that when girls stressed by a test talked with their moms, stress hormones dropped and comfort hormones rose. When they reached out to their moms via IM, however, nothing happened. Thus, IM’ing with their moms was barely different from not communicating with them at all – it was ineffective in conveying comfort. Taken together, these types of studies help get at why (and why not) devices become an integral part of how we cope, and of our emotional lives.
Bottom line. These are all just ideas. But I believe the bottom line is this: We need the Sherry Turkles of the world to help identify these issues and develop compelling hypotheses (and we need those who would disagree with her), but we also need people to, literally, put these ideas to the test.
In addition to being a shower blogger (see post from two weeks ago), I am also an exerblogger – I talk through blog ideas when I exercise with my trainer Blair. Mostly it’s to take my mind off the unpleasant task of exercising, but really it’s because I have a captive audience – Blair – who is my 20-something sounding board. Blair is not a huge social media user, but like many of his generation, it’s just part and parcel of his social life and the way he thinks about the world. The topic last week was the psychological spotlight.
The notion is that when we use social media, the things we do and say, the way we look, and the things we find interesting seem to have a heightened importance and to be under scrutiny. That is, we know that our lives can be transmitted (by us or others) at any time to the social network, to be seen, heard, and evaluated. So, psychologically, we’re always on stage, in the spotlight. And if we’re always on stage, then maybe, on some level, we are acting and not being fully authentic. Using social media can sometimes feel like being a celebrity walking down the street who knows that the paparazzi are always waiting around the corner.
And this is what is new about social media compared to previous ways of connecting with others – we can share just about anything, via a wide range of media, extremely easily. We can be seen and heard whenever we want. And, in turn, we can be nosy parkers and learn a lot about others whenever we want. Decades ago, in her collection of essays, On Photography, Susan Sontag argued that photography creates in people a “chronic voyeuristic relation” to the world around them. But Ms. Sontag did not imagine the level to which social media could take both our voyeuristic and exhibitionistic impulses.
My 3-year-old already gets this, although he doesn’t yet use social media. For him, the impulse to document and to be seen is fully entrenched – “Mama, take a video,” he says, every time he is doing something “cool.” This could be dancing, building blocks, making a funny face, kissing his sister, anything. And every video on demand (that is, he demands the video) ends with my son walking towards me and the device I’m holding, asking, “Can I see it? Can I see it?”
And this is what gets me wondering. Am I raising my son to be more self-conscious, more of an exhibitionist, and less authentic about what he says and does, because he knows he will be documented? Because he feels that he is on stage? Does he think he’s special just because he’s being recorded? Maybe not – all kids like to be seen, and among other things, it’s super cute and fun. But the ease of documentation and of sharing with others has taken this natural impulse to a whole new level.
This issue is similar to the debate about self-publishing discussed in a New York Times article over the weekend. The question raised was this: when parents pay to make their children “published authors,” are they giving children a false sense of self-esteem to the point of self-aggrandizement? Are we, ironically, not preparing them for the rigors and tough knocks and rejections of the real world by making everything too easy? The self-esteem issue here is central because these published child authors feel famous, feel seen, because their books are read. They are on stage.
I think there are no clear answers to these issues. I do, however, think that most of us would agree that being on stage is a deeply rooted impulse in our culture today – from reality television to YouTube to Facebook, this has been going on for a long time. Think back to America’s Funniest Home Videos (wait, is that still on?). I’m not saying this impulse is new, or necessarily bad, but the more central the psychological spotlight becomes to how we all operate, the more we need to take time to understand what it means.