Blast from the Past: The Game Doesn’t Care: Why the Gamification of Mental Health Isn’t Working (Yet)

This post is from 5 years ago, July, 2013. I believe we’re all still thinking about and struggling with these same issues today! 

Games that are not games. There is a serious barrier to the effective gamification of mental health. This barrier is that the games we psychologists and health professionals are coming up with are not fun. In fact, they are totally uncool, border on the condescending, and wouldn’t motivate anyone to play for more than 30 seconds. This is the case even though the bar is set quite low because these “games” address things that people really want, like boosting our intelligence and memory, reducing depression and stress, quitting smoking, … fill in the blank. I’ve been fascinated with this disconnect between Psychology’s view and real-world acceptability. This disconnect is plaguing other fields as well, such as the development of “serious games” for education. In this larger context, I’ve been working on the development of an app that takes a scientifically proven approach to reducing stress and anxiety, and embeds the “active ingredient” of this intervention into a game that is fun – fun enough, we hope, for someone to want to play for much more than 30 seconds.

Fun versus health goals. In the midst of  this ongoing development process, I had the pleasure of speaking with Nick Fortugno, co-founder of the game design company Playmatics. In addition to creating really fun games, like Diner Dash, he has created games to promote positive social change and is one of the visionary and forward-thinking advocates for the idea that serious games can and should be fun. So, he has a deep understanding of the barriers facing the gamification of mental health. As we were talking about these barriers, Nick said something that really got me thinking. He said, when we design games for education or health, we have to remember that “the game doesn’t care” about whether we’re making progress towards our goal. In other words, a game isn’t fun because it meets some criterion that we, the developers, have for success – like boosting our ability to remember, reducing symptoms of anxiety, or losing 5 pounds. A game is fun because it creates an aesthetic experience and facilitates game play that we want to come back to again and again. Therefore, I would argue that a “serious” goal embedded in a truly fun game is reached as a by-product of the fun.

The need for backward engineering. I think I am accurate in saying that very few people, myself included, who are trying to create serious games for wellness think like this – i.e., like a game designer – about the process of gamification. From what I can tell, game designers think very deeply about the experience they want the game to promote, and then they work through the pragmatics of the game play that will facilitate this experience. This backward engineering from the point of view of the aesthetic/experiential goal to the pragmatics of the game is the opposite of what psychologists do when they think about gamification. Instead, we have parallel streams of development in which (a) we know that our “game” (read scientific protocol) is truly boring, and (b) we have to somehow decrease the snore factor. We think: “Hm, here is my very rigid experimental protocol/computerized intervention. I must overlay this protocol with some cute little animated guys, perhaps with a fun back-story (wizards? aliens?) and then make sure users get points when they conform to the requirements of the protocol.” Sounds thrilling, huh? So fun? Exactly the recipe for the next Dots? Right…. So, we have a lot to learn from game designers, and I believe that crucial to the future of the endeavor of gamifying mental health is partnering with people who know how to create fun and understand the process of game design.

Pocket rituals. What would it be like if we created mental wellness tools, or even interventions for serious mental health problems, that were truly fun and that could become part of our array of habits and strategies for feeling better, reducing symptoms, performing more efficiently, or dealing with stress? These games, if “snackable,” would become our pocket rituals, our chill pills. We could take out our device for 5, 10, or 15 minutes and be empowered to bring about a targeted, appreciable positive impact. The barriers to use should be minimal, the experience intrinsically rewarding – that is, it feels good to play – as well as reinforcing because it helps us meet our health goals. I think many psychologists feel that this kind of design is not easily reconciled with a rigorous scientific approach. But if we fail to find a way to do this – good science and giving people tools they want to use – then the whole endeavor is dead in the water.

The Medium is the Message: On Mindfulness and Digital Mirrors

I recently had the pleasure of doing a talk-back with Congressman Tim Ryan on the role of mindfulness – focusing your awareness on the present moment – in education, as part of the Rubin Museum’s Brainwave Festival in NYC. The film we were discussing, called “Changing Minds at Concord High School,” followed an entire school as they took part in a mindfulness training program. This school is unique in that it is a transfer school, a last stop for many kids with a history of school failure and discipline problems. The twist here is that the students both filmed the experience and conducted a study – of their classmates! – comparing the effects of mindfulness training with those of a placebo. We also included a science curriculum on the neuroscience of mindfulness – how it can change our brains for the better. I was the lead scientist on this project, so the kids were my “research assistants.” The project was spearheaded and directed by the amazing Susan Finley and filmed by the equally inspiring Peter Barton (with the help of the students). Our outstanding scientific advisors were David Vago and Robert Roeser. There is a lot that was amazing about this project, these kids, and this film. I want to focus on just one aspect, which hinges on the phrase “The medium is the message.”


The medium is the message. This phrase was coined by Marshall McLuhan who put forward the idea that the “form of a medium embeds itself in the message.” That is, the medium in which we experience something influences how we perceive the take-home message. Using movies as an example, he argued that the way in which this medium presents time has transformed our view of time from something that is linear and sequential into something that reflects patterns of connection across people and places. I am obviously no film theorist, but I apply this notion to the idea that different media provide us with an array of tools that can help us create a narrative of ourselves and the world that is unique to that medium.

Film and self-identity. In the case of our film “Changing Minds at Concord High School,” I believe that one way that the medium was the message for our students was that film is able to portray individual identities as being truly flexible and changeable. I think that the teens at Concord High, many of whom have experienced tremendous challenges, stress, and obstacles in life, didn’t believe as a group that change for them was really possible. But what our program strove to do, using converging media – film, scientific readings, mind/body experiences of mindfulness – was to convince these young adults that they really could change their brains, change counterproductive habits of thinking, and find the tools to focus more and let negative feelings go. As we move on to Phase 2 of the project by refining and developing our program, we are asking the fundamental question: How can we best use these tools to teach teens to view themselves and the world differently, creating a narrative in which personal change is possible?

Our digital mirrors. I think these issues are especially important to consider now, in this era of social media and reality television in which we crave to see ourselves reflected back to ourselves. We can criticize this, and analyze this, but the fact of it borders on the irrefutable. We know that it’s easier than ever before to document our lives via pictures and videos on our mobile devices, and share them with our digital networks. And we love to do so. Social media, through which we share our images of ourselves and our lives, are an immeasurably huge and complex array of mirrors into which we can gaze at ourselves. There may be costs and benefits to this, but it simply is. The power of this, however, is that we now have a new set of tools to curate our beliefs about who we are – hopefully for the better. And perhaps we believe this evidence of who we are more strongly because it is concrete, it is documented, it receives “likes” and is seen by others and thus is real. I’m liked therefore I am.

This digital infrastructure also provides a profound opportunity for those trying to support growth and positive change in youth. If we help youth document the possibility of change – like we did in “Changing Minds at Concord High School”- they may start to believe it applies to their own lives. This is particularly important for those of us who aren’t used to feeling that the world is full of possibilities. In this way, social networking may be a medium that gives the message that change is possible and that our limitations are as fluid as the flow of information.

Gamifying Mental Health or: Mental Health – We Got Game

I just attended the second annual Entertainment Software and Cognitive Neurotherapeutics Society (ESCoNS) conference. Say that five times fast.  This conference brought together people in the gaming world with cognitive neuroscientists. I went because I’m developing (and testing) an app that I believe can help people reduce stress, worry, and anxiety in their lives. In addition to more deeply exploring how to make mental health truly fun, I felt that I was seeing the future of mental health unfolding before my eyes.


Here are four ideas I think will change how the field of mental health will look in a decade (or less):

1. Mental health care WILL BE gamified. The mobile revolution and app zeitgeist have changed how we get things done. We want an app for everything because we want our life mobile and streamlined, and the minute we think we want to do something, we want a device to help us do it. We are also trusting ourselves (and our networks) more and professionals less. This is the self-help movement taken to a new level. If we can seek mental health support on our devices rather than through a professional, more of us will do so. This plays into our growing tendency to feel more comfortable with devices than with others – this may be good or bad, or somewhere in between, but this is how it is. I believe the question is not whether mental health care will be gamified, only how and when.

2. Fun will motivate mental health treatment seeking. Scientists interested in human beings understand how to break something down into its component parts (whether an idea, a behavior, or a biological response) to study it, but scientists are not trained to construct something that is fun and that motivates people to come back again and again. That is art and intuition, combined with a lot of experience and good old-fashioned luck. If we want to reach the greatest number of people, and help them integrate mental health interventions into their lives, we need to make mental health fun.

3. Training your brain….with video games? The idea that you could train your brain with video games is still perceived by many to be in the realm of science fiction. But if you consider that every experience we have, particularly repeated experiences, changes our brains – why wouldn’t a video game? This reflects the important concept of neural plasticity – that the structure and function of the brain are malleable and changeable not just in childhood, but throughout the lifespan. In addition to games that can train different abilities (e.g., attention in kids with ADHD), technologies like virtual reality are being used as safe and effective ways to treat everything from addiction to post-traumatic stress disorder.

4. The Emotional Brain is a “buzzing” target for intervention. In the 20th century, psychology was dominated by cognitive theories of how the brain works and what causes mental illness. Emotion was a little blip on the screen, an irrational irritant to the otherwise rational, predictable, and orderly domain of the thinking mind. Now, that irritant is an increasingly important focus of research. For example, not much more than a decade ago, economic decision making was understood as a “rational” process. Now it’s assumed that emotions influence our decisions, for better and for worse, and the task is to figure out how. The effect of emotion is not “irrational.” Rather, it reflects the fundamental integration between our ability to feel and to think. Without one, the other is deeply impoverished. As emotion researchers, my colleagues and I are happy everyone has caught up – it’s about time! Emotions are the engines of our lives – and of psychopathology. No real living happens in an emotional vacuum.

It was clear to me from the conference that there is an emerging field in which the gaps between clinical psychology, cognitive neuroscience, and entertainment are being bridged. This field is fundamentally interested in the emotional and social brain, and “healthy emotional brain architecture” will be the goal of many computerized, gamified interventions. Increasingly, people predict a (near) future in which games will routinely be prescribed in the doctor’s office, and may eventually replace the office visit. If we can change our emotional brains, we can change ourselves. At least, that’s what many are counting on.

 

Islands in the Stream: A Meditation on How Time Passes on Facebook

Shortly after the terrible tragedy in Newtown, I received email notifications that my (designated) close friends on Facebook had made status updates. Scrolling through my news feed, my friends expressed the range of emotions that we all felt – horror, sadness, distress, anger, and confusion. Later that day, I popped onto Facebook again and was jarred and a little upset to read that friends who seemed to have just expressed horror and heartbreak were now posting about every day, silly, and flippant things.

Now, why should I be jarred or upset? Hours had gone by. After three, or six, or ten hours, why wouldn’t we be in a different emotional state, and why wouldn’t it be ok to post about it? I started to think that it was not my friends’ posts that were at issue here. Rather, it was the nature of how I perceive the passage of time and sequence of events on Facebook. A couple aspects of this came to mind:

Facebook time is asynchronous with real time. Time is easily condensed on Facebook. Events and updates that might be spread out over the course of a day or several days can be read at a glance, and therefore seem to be happening almost simultaneously. So, our perception of time on Facebook is a combination of how frequently our friends post and how frequently we check in. For example, say I check in twice in two days – at 9am on day 1 and at 9pm on day 2. I know a good bit of time has passed (and the amount of time that has passed is clearly indicated next to friends’ updates), but I still read each status update in the context of the previous ones – especially if I pop onto a friend’s Timeline instead of my news feed.

With this type of infrequent checking, friends’ updates about their varying and changing emotions (which might be reasonably spread out over the course of a day or multiple days) appear to be an emotional roller coaster. If someone has several posts in a row about the same thing, even if they are spaced days apart, the person comes across as preoccupied with the topic. Somehow, I form a view of this individual that brings these little snippets together into one big amorphous NOW. If I were checking more frequently, however, perhaps I wouldn’t lump updates together in this way. I’d “feel” the passage of time and – more accurately – see that the ebb and flow of status updates are like islands in the stream of our lives rather than a direct sequence of events.
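To make this lumping concrete, here is a toy sketch in Python (entirely my own illustration, with invented posts and timestamps, nothing from Facebook’s actual feed or API): updates spread across two days all arrive in one glance when I check in only once, so my brain reads them as a single moment.

```python
from datetime import datetime

# Toy illustration: posts spread across two days collapse into a single
# perceived "NOW" when they are all read in one infrequent check-in.
posts = [
    ("horrified and heartbroken", datetime(2012, 12, 14, 11, 0)),
    ("no words today",            datetime(2012, 12, 14, 18, 30)),
    ("great latte this morning",  datetime(2012, 12, 15, 9, 15)),
    ("movie night!",              datetime(2012, 12, 15, 20, 45)),
]

check_in = datetime(2012, 12, 16, 9, 0)   # the one time I open the news feed

unread = [(text, stamp) for text, stamp in posts if stamp <= check_in]
actual_span = max(stamp for _, stamp in unread) - min(stamp for _, stamp in unread)

print(f"{len(unread)} updates read in a single glance at {check_in}")
print(f"Actual time they span: {actual_span} -> perceived as one moment")
```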


Related to this first point, it occurred to me that status updates are not meant to be interpreted in the context of preceding status updates. Our brains are pattern recognition machines. So, if Facebook status updates follow one after the other, our brains may perceive a direct sequence of events. But, each status update is a snapshot of a moment, a thought, or a feeling. Intuitively, they are supposed to be stand-alone, not readily interpreted in the context of a previous update, even if they occur close together in actual time. Think how different this is from our face-to-face interactions, in which the sequence of events matters. For example, imagine that you’re at work, and your co-worker tells you she is on pins and needles waiting to hear back about a medical test. When you see her a few hours later, she is joking and laughing. You assume she either (a) got some good news from the doctor, or (b) is trying to distract herself from the worry. You don’t think she’s just having a good time, out of context of what you learned about her earlier in the day. But this contextualization is not the way it works on Facebook. Linkages between updates are tenuous, connections malleable. We can lay out our stream of consciousness in a way that requires no consistency among updates. Maybe the temporal and logical requirements of the off-line world are suspended on social networking sites like Facebook. Maybe our brains need to catch up.

Through a Glass, Darkly; But Then Face to Face: Sensitive Souls and Social Media

There is an idea out there that’s prevalent but which has little or no scientific support:  that people who use more social media are less sensitive, less empathic, and less emotionally attuned. My students Lee Dunn, Amy Medina and I wanted to put that assumption to the test (and reported these findings at the Society for Psychophysiological Research Annual Conference). We found the opposite: that people who prefer to use technology like social media to communicate with others are actually more emotionally sensitive and more empathic. These folks aren’t emotionally stunted or disconnected. If anything, they are more attuned to their emotions and to the emotions of others, and also might be more challenged by these emotions. They are “sensitive souls.”

This makes sense when you start to think about how hard face-to-face interactions can be. When we use social media, we may feel in control and safe compared to face to face. Technology affords a comfortable distance. It’s simply easier to tell someone you’re angry via email or IM, without having to deal with their reactions in person. So, if you’re an emotionally sensitive person, you might be drawn to social media. This is a judgment-free statement. Our findings don’t weigh in on whether this helps or hinders a person’s social and emotional skills. That is the critical next step in our research. Here is what we know so far:

How we put it to the test. While previous studies have asked people to report on very basic aspects of their social media use – like how many hours a week they use social media sites – we did something new. We asked people how they prefer to communicate with others (and what they actually did over the past 6 months) when they need to express emotions like anger or excitement, ask for or give social support during emotionally tough times, and exchange information. For each question, answers could vary from 100% using technology (not including the phone) to 100% using face-to-face interactions. Many people showed a strong face-to-face preference, but just as many showed a strong tech preference.
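For readers who want to see the arithmetic, here is a minimal sketch of how a preference score like this could be computed. The item names and percentages are hypothetical stand-ins, not our actual questionnaire items or data.

```python
# Hypothetical items: each asks what share of that kind of communication
# happened via technology, from 0 (all face-to-face) to 100 (all tech).
item_scores = {
    "expressed anger":       80,
    "expressed excitement":  60,
    "asked for support":     40,
    "gave support":          55,
    "exchanged information": 90,
}

def tech_preference(scores):
    """Average percent-via-technology across items (0 = all face-to-face, 100 = all tech)."""
    return sum(scores.values()) / len(scores)

overall = tech_preference(item_scores)
label = "tech preference" if overall > 50 else "face-to-face preference"
print(f"Overall score: {overall:.1f} -> {label}")
```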

Then, we asked people to tell us about their emotional lives – emotional highs and lows, empathy for others, personality, and satisfaction with the social support they receive from others. Finally, we recorded EEG (aka “brainwaves”) while they viewed emotional pictures. While EEG doesn’t give us the power to directly access people’s consciousness (Oh, Dennis Quaid, you really had us believing that you could EEG your way into our brains in the 1984 movie Dreamscape), EEG can measure the degree to which our brains are sensitive to different types of emotional information – pleasant, disgusting, erotic, dangerous, and cute, cuddly things. We showed participants everything from sex to kittens, and graves to gore.

The power of EEG, portrayed by the movie Dreamscape (1984). Dennis Quaid is probably NOT looking at pictures of kittens.
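For the technically curious, here is a schematic sketch of the kind of analysis involved (a simulation I wrote for illustration, not our lab’s actual pipeline or data): “sensitivity” to a picture category is summarized by averaging the EEG across trials of that category and comparing mean amplitudes in a post-stimulus time window.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250                                  # sampling rate in Hz (assumed)
time = np.arange(-0.2, 0.8, 1 / fs)       # -200 ms to +800 ms around picture onset

def simulate_trials(n_trials, amplitude):
    """Fake single-trial EEG: a slow positive deflection around 400 ms plus noise."""
    deflection = amplitude * np.exp(-((time - 0.4) ** 2) / 0.02)
    return deflection + rng.normal(0, 2.0, size=(n_trials, time.size))

# Average over trials to get an event-related potential (ERP) per category.
erp_mortality = simulate_trials(40, amplitude=6.0).mean(axis=0)   # graves, loss
erp_neutral   = simulate_trials(40, amplitude=3.0).mean(axis=0)   # everyday objects

window = (time > 0.3) & (time < 0.6)      # window where the simulated response peaks
print("Mean amplitude, mortality pictures:", round(float(erp_mortality[window].mean()), 2))
print("Mean amplitude, neutral pictures:  ", round(float(erp_neutral[window].mean()), 2))
```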

Findings. Data analyses are incomplete and are not yet published, so I’ll only discuss the broad strokes of our findings. As I stated at the top, those who prefer to communicate via social media and technology versus face-to-face interactions are sensitive souls: they report feeling more negative emotions (like anxiousness and sadness), are less extroverted, and are less satisfied with the social support they receive from others. On the other hand, they also report feeling more empathic towards others (for example, “I get a strong urge to help when I see someone who is upset” or “it upsets me to see someone being treated disrespectfully”).

Complementing this, EEG findings show that those with a social media/tech preference have stronger brain responses to pictures portraying mortality – graves, sick people, dying loved ones. That is, the brains of folks who prefer social media are more sensitive to pictures that are reminders of death and loss.

This is not about social media causing anything! The popular press often describes research about social media in inaccurate ways – saying that social media caused people to be a certain way (e.g., the idea of Facebook depression). This sounds sexy but is just wrong most of the time. Unless you’ve done experiments that show social media directly change something about people, or you’ve tracked how social media predicts changes in people over time, you cannot even begin to discuss causality.

So what can we discuss? What does this all mean? What it means is that our findings are not about causality, they are descriptive. These results help us to describe the social-emotional profile of people who prefer and use tech-mediated versus face-to-face social interactions – their personalities, goals, strengths, and vulnerabilities. Ultimately, this can help us understand the growing role of social media in our everyday routines, and why, for some, these tools can feel like lifeboats in the stormy seas of our lives. What remains unclear is whether these lifeboats are going to bring us to shore or whether we will be lost at sea (ok, this metaphor is getting a little much).

Where are we going with this? Importantly, we have no idea what the long-term costs or benefits of social media are for our sensitive souls. That is where I am really going with this research. I believe we need to track how a tech preference influences us from the cradle to the rocking chair: in our digital natives who are using these tools before they are out of diapers; in adults, who almost can’t remember a time when these tools didn’t exist; and in older adults, who may be discovering the immense world that opens up before them when they use technology to communicate with others.

Cyborgs, Second Brains, and Techno-Lobotomies: Metablog #2

Last week, I had the pleasure of being Freshly Pressed on WordPress.com – that is, I was a featured blog on their home page. As a result, I got more traffic and more interesting comments from people in one day than I have since I began blogging. Thanks, WordPress editors!

I’ve been really excited and inspired by the exchanges I’ve had with others, including the ideas and themes we all started circling around. Most of the dialogue was about a post I made on technology, memory, and creativity. Here, I was interested in the idea that the more we remember the more creative we may be simply because we have a greater amount of “material” to work with. If this is the case, what does it mean that, for many of us, we are using extremely efficient and fast technologies to “outsource” our memory for all sorts of things? – from trivia, schedules, and dates, to important facts and things we want to learn. What does this mean in terms of our potential for creativity and learning, if anything? What are the pros and cons?

I was fascinated by the themes –maybe memes? – that emerged in my dialogue with other bloggers (or blog readers). I want to think through two of them here. I am TOTALLY sure that I wouldn’t have developed and thought through these issues to the same degree – that is, my creativity would have been reduced – without these digital exchanges. Thanks, All.

Picture taken from a blog post by Carolyn Keen on Donna Haraway’s Cyborg Manifesto

Are We Becoming Cyborgs? The consensus is that – according to most definitions – we already are. A cyborg (short for cybernetic organism) is a being that enhances its own abilities via technology. In fiction, cyborgs are portrayed as a synthesis of organic and synthetic parts. But this organic-synthetic integration is not necessary to meet the criteria for a cyborg. Anyone who uses technology to do something we humans already do, but in an enhanced way, is a cyborg. If you can’t function as well once your devices are gone (like if you leave your smartphone at home), then you’re probably a cyborg.

A lot of people are interested in this concept. On an interesting blog called Cyborgology they write: “Today, the reality is that both the digital and the material constantly augment one another to create a social landscape ripe for new ideas. As Cyborgologists, we consider both the promise and the perils of living in constant contact with technology.”

Yet, on the whole, comments on my post last week were not made in this spirit of excitement and promise – rather, there was concern and worry that by augmenting our abilities via technology, we will become dependent because we will “use it or lose it.” That is, if we stop using our brain to do certain things, these abilities will atrophy (along with our brain?). I think that’s the unspoken (and spoken) hypothesis and feeling.

Indeed, when talking about the possibility of being a cyborg, the word scary was used by several people. I myself, almost automatically, have visions of Borgs and Daleks (look it up, non-sci-fi geeks ;-)) and devices for data streaming implanted in our brains. Those of us partial to future dystopias might be picturing eXistenZ – a Cronenberg film about a world in which we’re all “jacked in” to virtual worlds via our brain stem. The Wikipedia entry describes it best: “organic virtual reality game consoles known as “game pods” have replaced electronic ones. The pods are attached to “bio-ports”, outlets inserted at players’ spines, through umbilical cords.” Ok, yuck.

There was an article last month on memory and the notion of the cyborg (thank you for bringing it to my attention, Wendy Edsall-Kerwin). Here, not only is the notion of technological augmentation brought up, but the notion of the crowdsourced self is discussed. This is, I believe, integral to the notion of being a cyborg in this particular moment in history. There is a great quote in the article, from Joseph Tranquillo, discussing the ways in which social networking sites allow others to contribute to the construction of a sense of self: “This is the crowd-sourced self. As the viewer changes, so does the collective construction of ‘you.’”

This suggests that, not only are we augmenting ourselves all the time via technology, but we are defining ourselves in powerful ways through the massively social nature of online life. This must have costs and benefits that we are beginning to grasp only dimly.

I don’t have a problem with being a cyborg. I love it in many ways (as long as I don’t get a bio-port inserted into my spine). But I also think that whether a technological augmentation is analog or digital, we need to PAY ATTENTION and not just ease into our new social landscape like a warm bath, like technophiles in love with the next cool thing. We need to think about what being a cyborg means, for good and for bad. We need to make sure we are using technology as a tool, and not being a tool of the next gadget and ad campaign.

The Second Brain. This got a lot of play in our dialogue on the blog. This is the obvious one we think about when we think about memory and technology – we’re using technological devices as a second brain in which to store memories to which we don’t want to devote our mental resources.

But this is far from a straightforward idea. For example, how do we sift through what is helpful for us to remember and what is helpful for us to outsource to storage devices? Is it just the trivia that should be outsourced? Should important things be outsourced if I don’t need to know them often? Say for example, I’m teaching a class and I find it hard to remember names. To actually remember these names, I have to make a real effort and use mnemonic devices. I’m probably worse at this now than I was 10 years ago because of the increased frequency with which I DON’T remember things in my brain now. So, given the effort it will take, and the fact that names can just be kept in a database, should I even BOTHER to remember my students’ names? Is it impersonal not to do so? Although these are relatively simple questions, they raise, for me, ethical issues about what being a professor means, what relating to students means, and how connected I am to them via my memory for something as simple as a name. Even this prosaic example illustrates how memory is far from morally neutral.

Another question raised was whether these changes could affect our evolution. Thomaslongmire.wordpress.com asked: “If technology is changing the way that we think and store information, what do you think the potential could be? How could our minds and our memories work after the next evolutionary jump?”

I’ll take an idea from andylogan.wordpress.com as a starting point – he alluded to future training in which we learn how to allocate our memory, prioritize maybe. So, perhaps in the future, we’ll just become extremely efficient and focused “rememberers.” Perhaps we will also start to use our memory mainly for those types of things that can’t be easily encoded in digital format – things like emotionally-evocative memories. Facts are easy to outsource to digital devices, but the full, rich human experience is very difficult to encode in anything other than our marvelous human brains. So if we focus on these types of memories, maybe they will become incredibly sophisticated and important to us – even more so than now. Perhaps we’ll make a practice of remembering those special human moments with extreme detail and mindfulness, and we’ll become very, very good at it. Or, on the other hand, perhaps we would hire “Johnny Mnemonics” to do the remembering for us.

But a fundamental question here is whether there is something unique about this particular technological revolution. How is it different, say, than the advent of human writing over 5,000 years ago? The time-scale we’re talking about is not even a blink in evolutionary time. Have we even seen the evolutionary implications of the shift from an oral to a written memory culture?  I believe there is something unique about the nature of how we interact with technology – it is multi-modal, attention grabbing, and biologically rewarding (yes, it is!) in a way that writing just isn’t. But we have to push ourselves to articulate these differences, and seek to understand them, and not succumb to a doom and gloom forecast. A recent series of posts on the dailydoug  does a beautiful job of this.

Certainly, many can’t see a downside and instead emphasize the fantastic opportunities that a second brain affords us; or at least make the point, as robertsweetman.wordpress.com does, that “The internet/electronic memory ship is already sailing.”

So, where does that leave us? Ty Maxey wonders if all this is just leading to a technolobotomy – provocative term! –  but I wonder if instead we have an amazing opportunity to take these technological advances as a testing ground for us to figure out as a society what we value about those capacities that, for many of us, are what we think make us uniquely human.

So Long Ago I Can’t Remember: Memory, Technology, and Creativity

I recently read an interesting blog through Scientific American by the writer Maria Konnikova. In it, she writes about how memorization may help us be more creative. This is a counterintuitive idea in some ways because memorizing information or learning something by rote seems the antithesis of creativity. In explanation, she quotes the writer Joshua Foer, the winner of the U.S. memory championship, from his new book: “I think the notion is, more generally, that there is a relationship between having a furnished mind (which is obviously not the same thing as memorizing loads of trivia), and being able to generate new ideas. Creativity is, at some level, about linking disparate facts and ideas and drawing connections between notions that previously didn’t go together. For that to happen, a human mind has to have raw material to work with.”

This makes perfect sense. How can we create something new, put things together that have never before been put together, if we don’t really know things “by heart”? This makes me think of the great classical musicians. Great musicians know the music so well, so deeply, that they both play it perfectly in terms of the intention of the composer AND are able to add that ineffable creative flair. It’s only when you’ve totally mastered and memorized the music that you can put your own stamp on it and it becomes something special. Otherwise, it’s robotic.

These issues are incredibly relevant to how human memory is adapting to new information technologies. Research has recently shown that when we think we can look up information on the internet, we make less effort and are less likely to remember it. This idea is referred to as “transactive memory” – relying on other people or things to store information for us. I think of it as the External Second Brain phenomenon – using the internet and devices as our second brain so that we don’t have to hold all the things we need to know in our own brain. As a result, how much do we actually memorize anymore? I used to know phone numbers by heart – now, because they are all in my phone’s address book, I remember maybe five numbers and that’s it. How about little questions I’m wondering about, like: When was the first Alien movie released (okay, I saw Prometheus last week)? The process of getting the information is – 1. Look it up; 2. Say, “ah, right, interesting”; 3. Then with a 75% probability in my case forget it within a week. Information is like the things we buy at a dollar store – easily and cheaply obtained, and quickly disposed of.

A colleague in academia once told me about an exercise his department made their graduate students go through in which they presented their thesis projects – the information they should know the best, be masters of really – using an old-school flip board with paper and sharpies. Without the help of their PowerPoint slides and notes, they could barely describe their projects. They had not internalized it or memorized it because they didn’t need to. It was already in the slides. If they didn’t know something about their topic, they could just look it up with lightning speed. Only superficial memorization required.

In addition, the process of relating to and transcribing information has changed. Today, if students need to learn something, they can just cut and paste information from the internet or from documents on their computers. They often don’t need to type it in their own words, or even type it at all. They miss a key opportunity to review and understand what they are learning. We know that things are remembered better when they are effortfully entered into memory – through repetition, and using multiple modalities like writing it out and reading it. If you quickly and superficially read something, like we do all the time when we are on the internet or zooming from email to website to app, then you cannot put that information into memory as efficiently. For most of us, it just won’t stick.

On the other hand, shouldn’t the vast amounts of information we have at our fingertips aid us in our creative endeavors? Haven’t our world and the vision we have of what is possible expanded? Couldn’t this make us more creative? Perhaps, by delegating some information to our external second brains, we are simply freed up to focus our minds on what is important, or on what we want to create (credit to my student Lee Dunn for that point).

Also, I think many of us, me included, know that we NEED help negotiating the information glut that is our lives. We CAN’T keep everything we need to know memorized in our brains, so we need second brains like devices and the internet to help us. I don’t think we can or should always relate deeply to and memorize all the information we have to sift through. It is a critical skill to know what to focus on, what to skim, and what to let go of. This is perhaps the key ability of the digital age.

I also appreciate all the possibilities I have now that I would NEVER have had before were it not for the incredible breadth and speed of access to information. As a scientist, this has transformed my professional life for the good and the bad – along with opportunities comes the frequently discussed pressure to always work. But give up my devices? I’d rather give you my left arm (75% joking).

As a child developmentalist and psychologist, I feel that we have to acknowledge that these shifts in how we learn, remember, and create might start affecting us and our children – for good and bad – sooner than we think. This isn’t just the current generation saying “bah, these newfangled devices will ruin us (while shaking wrinkly fist)!!!” I think these changes are evolutionarily new, all-pervasive, and truly different. We as a society have to contend with these changes, our brains have to contend with these changes, and our children are growing up in a time in which memory as we think of it may be a thing of the past.