Can Digital Mental Health Tools Save Psychology?

After decades of research on mental health treatments for conditions ranging from schizophrenia to depression, from anxiety to autism, our track record remains poor. For example, anxiety disorders alone will affect over 90 million people in the U.S. over the course of their lives. That’s approaching a third of our population. Yet only a small fraction of us receive effective, long-lasting treatment. Thus, while we mental health professionals do much good and have some excellent, evidence-based treatments, we also know that, on balance, we are far from doing enough. We are failing.

I believe that there are many reasons for this failure. Psychological disorders are incredibly complex, with diverse and wide-ranging causes and manifestations that vary enormously from person to person. So we have an unbelievably tough problem to solve. But in addition, I believe that there is a two-part “recipe for disaster” that has put up additional barriers to the development of effective treatments:

  1. The stigma of mental illness
  2. Professionals’ tendency to minimize the importance of making treatments acceptable to the individual

The Stigma of Mental Illness

If you type “stigma definition” into Google, here is what comes up:

noun: stigma; plural noun: stigmata; plural noun: stigmas

  1. a mark of disgrace associated with a particular circumstance, quality, or person.

“the stigma of mental disorder”

synonyms: shame, disgrace, dishonor, ignominy, opprobrium, humiliation, (bad) reputation
antonyms: honor, credit

It is no coincidence that mental illness is the paradigmatic example given by the dictionary. It is one of the most pervasive and persistent of the social stigmas. Compared with other sources of stigma – like the stigma suffered by those diagnosed with HIV/AIDS in the ’80s and ’90s and beyond – the stigma of mental illness is especially striking because mental illness is not contagious. But we fear it as if it were. People with mental illness are NOT more likely to commit violence, and yet this is what many people fear. Take the media frenzy following the Sandy Hook Elementary School tragedy as an example of this type of assumption.

As long as mental illness remains a sign of disgrace and dishonor, people will avoid seeking professional help because it makes them feel broken – perhaps beyond repair.

Professionals Minimize the Importance of Making Treatments Acceptable to the Individual

There is another issue exacerbating the barrier created by the stigma of mental illness: we scientists and practitioners are socialized, in our training, away from figuring out how to provide individuals with the services they need in a way that they actually want – something that is obvious to any product- or service-oriented industry. Instead, we are taught to believe that we know best because we use the tools of science to develop the most efficacious treatments. The implicit narrative is: “We are the experts! We have figured out the best ‘medicine’ for you, now take it!” This arrogance often keeps us from seeing that if we develop treatments that are too onerous, or if treatments are embedded in a culture of disgrace and stigma, then we have failed to solve the problem. We have failed to meet “consumer needs.”

This is of course an overstatement and many mental health professionals actively fight against these attitudes. But there is a grain of truth here. Anyone on either side of the mental health fence – both professionals and patients – is familiar with this feeling, whether it’s acknowledged or swept under the rug.

How Digital Mental Health Tools Can Disrupt Stigma and Increase the Acceptability of Treatments

In addition to breaking down barriers to effective, affordable, and accessible mental health treatment, I believe that digital – in particular mobile – mental health tools can be harnessed to have profound and lasting disruptive effects on the stigma of mental illness and on our failure to make acceptability of treatments a top priority. Here are five ways I believe digital mental health tools might just save Psychology:

If treatments are administered on a device, they are normalized 

If we are successful in attempts to embed evidence-based treatments into mobile and gamified formats, I believe we can profoundly reduce both the experience and the appearance of stigma. Devices have become our filters of information, our gateways to the world, our sources of fun, and our hubs of connection. The actions we perform on our devices, by association, feel more “normal,” more connected to every aspect of our lives and to others. This creates a process of validation rather than shaming. By putting mental health treatments on devices, we might just be normalizing these treatments and creating positive emotional contagion – treatments become “good” by association with the devices we love. And if we gamify interventions, these effects could be strengthened even further.

Self-curating our mental health

With digital mental health tools, accessibility is dramatically increased. For example, with mobile mental health apps, you have affordable help “in the palm of your hand.” This ability to curate our own care creates a sense of empowerment. This is “self-help” in a very real sense. With this level of accessibility and empowerment, many of us will avail ourselves of interventions to reduce negative experiences and states. In addition, with the proliferation of digital tools to PROMOTE positive outcomes and help us reach our fullest potential, we may find, on the societal level, that this positive focus is just as helpful as the focus on preventing negative outcomes – if not more so. This attitude of promoting the positive is an excellent antidote to stigma. Who couldn’t benefit from promoting more of what is positive about oneself and how one lives life?

Digital health technology provides powerful platforms for community building

This benefit is readily apparent. With greater community building comes a sense of belonging and a reduction of isolation. But digital community building also provides opportunities for effective advocacy. Of course, many such groups already exist, but excellent digital mental health tools with a social media component could accelerate their creation, leveraging all the power of an individual’s full social network.

The profit motive will fuel innovation and valuing of consumer perspectives

Once interventions enter the digital and mobile technology world, the accompanying consumer focus (read $$$) will force the development of consumer-oriented products. Users have power in this domain. So, if interventions are onerous, boring, or non-intuitive, people will simply not use them. User stats will do the rest – no one will put resources into a product that people won’t use. Better ones WILL be developed.

Digital mental health increases opportunities for gamification

The gamification of mental health is beginning. At this point, we are taking baby steps, since we lack a strong empirical base; in other words, there is precious little research showing that computerized games have a direct, positive influence on mental illness or on the promotion of mental wellness. But we are only in the earliest, exciting stages of this revolution. As I’ve written elsewhere, I don’t think all treatments should be computerized or gamified, nor do I think face-to-face therapy is obsolete – far from it. But I believe that if fun can be combined with powerful treatment technologies, then we can in a single step make profound progress in erasing the stigma of mental illness and creating treatments that people will truly want to use.

Mental Health on the Go

My forthcoming research paper reporting on a mobile app that gamifies an emerging treatment for anxiety and stress – a paper that hopefully will be officially out in the next month or so – is starting to be discussed in the media, including the Huffington Post. Thank you, Wray Herbert, for such great coverage of the study.

Connectedness and the Call of Anxiety

A study suggests that more frequent mobile phone use might make you more anxious. This could reflect the burden of constant social connectedness, or even nomophobia – the “no-mobile-phone phobia” of losing connection. But we shouldn’t forget that this is a classic chicken-and-egg question: Are devices making us anxious, or do people who are already anxious simply use devices more frequently?

My Personal Zen

To follow up on my posts about gamifying mental health, I’m excited to announce that Personal Zen, my science-based (but still fun) stress-reduction game, is ready to share with the world! It’s free in the App Store, so please download it and check it out.

Research from my lab supports its efficacy in preventing and reducing stress and anxiety. As a game, though, it’s a beta version, and our goal is to get any and all feedback to make it more fun, user-friendly, and effective. So please try it and let me know what you think (either via this blog or the app, which has a “send feedback” button in the menu).

My larger goal is to develop a suite of mobile games for health based on sound scientific principles. As we increasingly curate our own emotional and mental wellness, I think it’s crucial that we have scientifically supported options to choose among. Because stress reduction is key to wellness, that’s where I’m starting with Personal Zen.

Here’s how it works (as I wrote in the App Store): When we get anxious or stressed, we pay too much attention to the negatives and have less ability to see the positives in life. These habits of attention reduce our ability to cope effectively with stress and can create a vicious cycle of anxiety. Personal Zen helps to short-circuit these habits and frees you up to develop a more flexible and positive focus. You can reduce your stress and anxiety in as little as one sitting, and the more you play, the more you strengthen well-being and vaccinate yourself against the negative effects of stress.

Essentially, the app works by helping people build new habits of paying attention to the world. But building new habits takes some practice, so we recommend spending time with it every week. I love using it on the NYC subways, and it’s truly “snackable” in that using it for a few minutes at a time reaps benefits.
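For readers who like to see the idea in concrete terms, here is a minimal sketch – my own toy illustration, not the app’s actual code – of the kind of attention-training loop described above. All of the names and numbers (the cues, the session length) are hypothetical; the point is simply that every round rewards orienting toward the positive cue, so repeated play practices a new habit of attention.

```python
# Toy sketch of an attention-training loop (illustrative only; not Personal Zen's code).
import random

POSITIVE_CUES = ["calm sprite", "smiling sprite"]    # hypothetical stimuli
NEGATIVE_CUES = ["angry sprite", "scowling sprite"]

def run_trial(choose):
    """Show one positive and one negative cue; the round is 'won' only
    if the player orients to (selects) the positive one."""
    cues = [random.choice(POSITIVE_CUES), random.choice(NEGATIVE_CUES)]
    random.shuffle(cues)
    return choose(cues) in POSITIVE_CUES

def play_session(choose, n_trials=30):
    """A short, 'snackable' session: returns the share of trials on which
    attention went to the positive cue."""
    return sum(run_trial(choose) for _ in range(n_trials)) / n_trials

if __name__ == "__main__":
    # A player choosing at random orients to the positive cue about half the time;
    # training aims to push that share toward 1.0 until it becomes habitual.
    print(play_session(lambda cues: random.choice(cues)))
```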

If you’re interested in any of the scientific background on the app, I’m happy to share both specific take-home messages and data.

Blocks and books better than electronic games for your toddler?

Thanks for this post, Dona Matthews! 

I think one important take-home message is that we need to think through how electronic toys could be designed to better foster communication and creativity.

This is Your Brain on Technology?

There is a lot of polarized dialogue about the role of communication technologies in our lives – particularly mobile devices and social media: technology is either ruining us or making our lives better than ever before. For the worried crowd, there is the notion that these technologies are doing something to our brains, something not so good – like making us stupid, numbing us, or weakening our social skills. It recalls the famous anti-drug campaign: This is your brain on drugs. In the original commercial, the slogan is accompanied by a shot of an egg sizzling on a skillet.

So, this is your brain on technology? Is technology frying our brain? Is this a good metaphor?

One fundamental problem with this metaphor is that these technologies are not doing anything to us; our brain is not “on” technology. Rather, these technologies are tools. When we use tools, we change the world and ourselves. So, in this sense, of course our brain is changed by technology. But our brain is also changed when we read a book or bake a pie. We should not accord something like a mobile device a privileged place beyond other tools. Rather, we should try to remember that the effects of technology are a two-way street: we choose to use tools in a certain way, which in turn influences us.

We would also do well to remember that the brain is an amazing, seemingly alchemical combination of genetic predispositions, experiences, random events, and personal choices. That is, our brains are an almost incomprehensibly complex nature-nurture stew. This brain of ours is also incredibly resilient and able to recover from massive physical insults. So, using a tool like a mobile device isn’t going to “fry” our brain. Repeated use of any tool will shape our brain, surely, but fry it? No.

So, “this is your brain on technology” doesn’t work for me.

The metaphor I like better is to compare our brains “on technology” to a muscle. This is a multi-faceted metaphor. On one hand, like a muscle, if you don’t use your brain to think and reason and remember, there is the chance that you’ll become less mentally agile and sharp. That is, if you start using technology at the expense of using these complex and well-honed skills, then those skills will wither and weaken. It’s “use it or lose it.”

On the other hand, we use tools all the time to extend our abilities and strength – whether it’s the equipment in a gym that allows us to repeatedly use muscles in order to strengthen them, or a tool that takes our muscle power and amplifies it (think of a lever). Similarly, by helping us do things better, technology may serve to strengthen rather than weaken us.

It is an open question whether either or both of these views is true – and for which people and under what conditions. But I believe that we need to leave behind notions of technology “doing” things to our brains, and instead think about the complex ways in which our brains work with technology – whether that technology is a book or a mobile device.

Mission Impossible?: Fitting the Techno-Social Landscape of Our Lives into Neat Little Boxes

What can science really tell us about the complex roles of social media, technology, and computer-mediated communication in our social lives? It’s a question I’ve been increasingly asking myself. As a scientist, my job is to deconstruct very complex phenomena into understandable components – to put things in neat, little, over-simplified boxes so that we can actually begin to understand something in systematic, replicable ways. Don’t get me wrong. I love science and think the tools of science are still the best we have available to us. But there are also limitations to these tools.

In particular, I think we haven’t even begun to wrap our heads around how all the technologies we use to augment our social lives work together to create a unique social experience. For example, the social context of texting is very different from that of Facebook, which is very different from the social context of blogging, and so on. Simply studying the number of hours a given person uses social media or some type of communication technology is not going to tell you a lot about that person’s life. A given person may be on Facebook 12 hours a week, avoid texting and talking on the phone, listen to all their music on Spotify, trawl YouTube videos 5 hours a week, video chat 12 times a week, and the list goes on. It seems to me that the experience of all these media, TOGETHER, makes up our full technosocial landscape – the gestalt of our lives.

So how do we start to understand each person’s unique profile of social technology use? One difference that could matter is that some of us use technology that facilitates direct social connection and social networking (e.g., Facebook), whereas others use technologies that are more like digital analogs of the phone (e.g., texting). It probably also matters whether these technologies augment or take the place of face-to-face interactions. There is an interesting post on the dailydoug blog that includes discussion of these kinds of differences.

I’m also starting to think it’s not so much the explicit social interactions we have via technology (e.g., commenting on someone’s status update on Facebook) that matter, but rather the degree to which we use technology to transport ourselves into a connected state of consciousness. I actually think this applies to any technology – we have probably all used books, music, TV, and other things to transport our consciousness and feel more connected to something bigger than ourselves. But in the case of mobile technology and social media, the nature of the game has changed in a fundamental way – communication is completely portable, deeply social, extremely fast, and set up in such a way that we feel “disconnected” if we don’t constantly check our devices.

So, how do we unpack the complex profiles of our technology use and the key role these technologies play in our sense of connection with others? What are the patterns? Are there patterns that are problematic or helpful in terms of making us all happier (and isn’t that the only thing that really matters?)? If a pattern is problematic, can we tweak it so that it becomes healthy? Are there optimal patterns for certain types of people? How can we take into account that while two people might both use Facebook 3 hours a day, they might respond to this experience completely differently (e.g., some people feel more depressed after using Facebook because of all the social comparisons that make us feel lacking; many others just feel happy and more connected)? Are there certain combinations of technology use and face-to-face time that allow people to feel connected in a way that enriches without the burden of too many forms of communication to keep up with? I think technology burden is a deepening issue, and that many of us are starting to figure out the costs and benefits of our digitally-connected lives.

Why do I think this is so hard for Science to examine? Because it is very difficult to scientifically study non-linear phenomena – processes that are not in the format of A influences B, which in turn influences C. Instead, when you have individuals, each with a unique profile of technology use that makes up their social lives, along with all the subjective experiences and feelings that go along with it, you have a really interesting multi-level dynamic system. Sometimes when you deconstruct a system to understand its separate parts, you lose the whole. You know, the old “the whole is greater than the sum of its parts.”

In answer to my question, I don’t think this is a mission impossible. But I think it’s a mission that is incredibly rich and challenging. I’m up for trying, and I hope that I and others can find a way to honor these complexities by finding scientifically valid “boxes” and approaches that are good enough to hold them.

Cyborgs, Second Brains, and Techno-Lobotomies: Metablog #2

Last week, I had the pleasure of being Freshly Pressed on WordPress.com – that is, I was a featured blog on their home page. As a result, I got more traffic and more interesting comments from people in one day than I have since I began blogging. Thanks, WordPress editors!

I’ve been really excited and inspired by the exchanges I’ve had with others, including the ideas and themes we all started circling around. Most of the dialogue was about a post I made on technology, memory, and creativity. There, I was interested in the idea that the more we remember, the more creative we may be, simply because we have a greater amount of “material” to work with. If this is the case, what does it mean that, for many of us, we are using extremely efficient and fast technologies to “outsource” our memory for all sorts of things – from trivia, schedules, and dates to important facts and things we want to learn? What does this mean in terms of our potential for creativity and learning, if anything? What are the pros and cons?

I was fascinated by the themes – maybe memes? – that emerged in my dialogue with other bloggers (or blog readers). I want to think through two of them here. I am TOTALLY sure that I wouldn’t have developed and thought through these issues to the same degree – that is, my creativity would have been reduced – without these digital exchanges. Thanks, All.

Picture taken from a blog post by Carolyn Keen on Donna Haraway’s Cyborg Manifesto

Are We Becoming Cyborgs? The consensus is that – according to most definitions – we already are. A cyborg (short for cybernetic organism) is a being that enhances its own abilities via technology. In fiction, cyborgs are portrayed as a synthesis of organic and synthetic parts. But this organic-synthetic integration is not necessary to meet the criteria for a cyborg. Anyone who uses technology to do something we humans already do, but in an enhanced way, is a cyborg. If you can’t function as well once your devices are gone (say, if you leave your smartphone at home), then you’re probably a cyborg.

A lot of people are interested in this concept. On an interesting blog called Cyborgology they write: “Today, the reality is that both the digital and the material constantly augment one another to create a social landscape ripe for new ideas. As Cyborgologists, we consider both the promise and the perils of living in constant contact with technology.”

Yet, on the whole, comments on my post last week were not made in this spirit of excitement and promise – rather, there was concern and worry that by augmenting our abilities via technology, we will become dependent because we will “use it or lose it.” That is, if we stop using our brain to do certain things, these abilities will atrophy (along with our brain?). I think that’s the unspoken (and spoken) hypothesis and feeling.

Indeed, when talking about the possibility of being a cyborg, the word scary was used by several people. I myself, almost automatically, have visions of Borgs and Daleks (look them up, non-sci-fi geeks ;-)) and devices for data streaming implanted in our brains. Those of us partial to future dystopias might be picturing eXistenZ – a Cronenberg film about a world in which we’re all “jacked in” to virtual worlds via our brain stems. The Wikipedia entry describes it best: “organic virtual reality game consoles known as ‘game pods’ have replaced electronic ones. The pods are attached to ‘bio-ports’, outlets inserted at players’ spines, through umbilical cords.” Ok, yuck.

There was an article last month on memory and the notion of the cyborg (thank you for bringing it to my attention, Wendy Edsall-Kerwin). Here, not only is the notion of technological augmentation brought up, but the notion of the crowdsourced self is discussed. This is, I believe, integral to the notion of being a cyborg at this particular moment in history. There is a great quote in the article, from Joseph Tranquillo, discussing the ways in which social networking sites allow others to contribute to the construction of a sense of self: “This is the crowd-sourced self. As the viewer changes, so does the collective construction of ‘you.’”

This suggests that, not only are we augmenting ourselves all the time via technology, but we are defining ourselves in powerful ways through the massively social nature of online life. This must have costs and benefits that we are beginning to grasp only dimly.

I don’t have a problem with being a cyborg. I love it in many ways (as long as I don’t get a bio-port inserted into my spine). But I also think that whether a technological augmentation is analog or digital, we need to PAY ATTENTION and not just ease into our new social landscape like a warm bath, like technophiles in love with the next cool thing. We need to think about what being a cyborg means, for good and for bad. We need to make sure we are using technology as a tool, and not being a tool of the next gadget and ad campaign.

The Second Brain. This got a lot of play in our dialogue on the blog. This is the obvious one we think about when we think about memory and technology – we’re using technological devices as a second brain in which to store memories to which we don’t want to devote our mental resources.

But this is far from a straightforward idea. For example, how do we sift through what is helpful for us to remember and what is helpful to outsource to storage devices? Is it just the trivia that should be outsourced? Should important things be outsourced if I don’t need to know them often? Say, for example, I’m teaching a class and I find it hard to remember names. To actually remember these names, I have to make a real effort and use mnemonic devices. I’m probably worse at this now than I was 10 years ago because of the increasing frequency with which I DON’T remember things in my own brain anymore. So, given the effort it will take, and the fact that names can just be kept in a database, should I even BOTHER to remember my students’ names? Is it impersonal not to do so? Although these are relatively simple questions, they raise, for me, ethical issues about what being a professor means, what relating to students means, and how connected I am to them via my memory for something as simple as a name. Even this prosaic example illustrates how memory is far from morally neutral.

Another question raised was whether these changes could affect our evolution. Thomaslongmire.wordpress.com asked: “If technology is changing the way that we think and store information, what do you think the potential could be? How could our minds and our memories work after the next evolutionary jump?”

I’ll take an idea from andylogan.wordpress.com as a starting point – he alluded to future training in which we learn how to allocate our memory, prioritize maybe. So, perhaps in the future, we’ll just become extremely efficient and focused “rememberers.” Perhaps we will also start to use our memory mainly for those types of things that can’t be easily encoded in digital format – things like emotionally-evocative memories. Facts are easy to outsource to digital devices, but the full, rich human experience is very difficult to encode in anything other than our marvelous human brains. So if we focus on these types of memories, maybe they will become incredibly sophisticated and important to us – even more so than now. Perhaps we’ll make a practice of remembering those special human moments with extreme detail and mindfulness, and we’ll become very, very good at it. Or, on the other hand, perhaps we would hire “Johnny Mnemonics” to do the remembering for us.

But a fundamental question here is whether there is something unique about this particular technological revolution. How is it different, say, from the advent of human writing over 5,000 years ago? The time-scale we’re talking about is not even a blink in evolutionary time. Have we even seen the evolutionary implications of the shift from an oral to a written memory culture? I believe there is something unique about the nature of how we interact with technology – it is multi-modal, attention-grabbing, and biologically rewarding (yes, it is!) in a way that writing just isn’t. But we have to push ourselves to articulate these differences and seek to understand them, and not succumb to a doom-and-gloom forecast. A recent series of posts on the dailydoug does a beautiful job of this.

Certainly, many can’t see a downside and instead emphasize the fantastic opportunities that a second brain affords us; or at least make the point, as robertsweetman.wordpress.com does, that “The internet/electronic memory ship is already sailing.”

So, where does that leave us? Ty Maxey wonders if all this is just leading to a technolobotomy – provocative term! – but I wonder if instead we have an amazing opportunity to take these technological advances as a testing ground for figuring out, as a society, what we value about the capacities that many of us think make us uniquely human.

So Long Ago I Can’t Remember: Memory, Technology, and Creativity

I recently read an interesting blog post at Scientific American by the writer Maria Konnikova. In it, she writes about how memorization may help us be more creative. This is a counterintuitive idea in some ways, because memorizing information or learning something by rote seems the antithesis of creativity. In explanation, she quotes the writer Joshua Foer, winner of the U.S. Memory Championship, from his new book: “I think the notion is, more generally, that there is a relationship between having a furnished mind (which is obviously not the same thing as memorizing loads of trivia), and being able to generate new ideas. Creativity is, at some level, about linking disparate facts and ideas and drawing connections between notions that previously didn’t go together. For that to happen, a human mind has to have raw material to work with.”

This makes perfect sense. How can we create something new, put things together that have never before been put together, if we don’t really know things “by heart”? This makes me think of the great classical musicians. They know the music so well, so deeply, that they can both play it perfectly in terms of the composer’s intention AND add that ineffable creative flair. It’s only when you’ve totally mastered and memorized the music that you can put your own stamp on it and it becomes something special. Otherwise, it’s robotic.

These issues are incredibly relevant to how human memory is adapting to new information technologies. Research has recently shown that when we think we can look up information on the internet, we make less effort to remember it and are less likely to do so. This idea is referred to as “transactive memory” – relying on other people or things to store information for us. I think of it as the External Second Brain phenomenon – using the internet and devices as our second brain so that we don’t have to hold all the things we need to know in our own brains. As a result, how much do we actually memorize anymore? I used to know phone numbers by heart – now, because they are all in my phone’s address book, I remember maybe five numbers and that’s it. How about little questions I’m wondering about, like: when was the first Alien movie released (okay, I saw Prometheus last week)? The process of getting the information is: 1. Look it up; 2. Say, “ah, right, interesting”; 3. Then, with a 75% probability in my case, forget it within a week. Information is like the things we buy at a dollar store – easily and cheaply obtained, and quickly disposed of.

A colleague in academia once told me about an exercise his department made their graduate students go through in which they presented their thesis projects – the information they should know best, be masters of, really – using an old-school flip board with paper and Sharpies. Without the help of their PowerPoint slides and notes, they could barely describe their projects. They had not internalized or memorized the material because they didn’t need to. It was already in the slides. If they didn’t know something about their topic, they could just look it up with lightning speed. Only superficial memorization required.

In addition, the process of relating to and transcribing information has changed. Today, if students need to learn something, they can just cut and paste information from the internet or from documents on their computers. They often don’t need to put it in their own words, or even type it at all. They miss a key opportunity to review and understand what they are learning. We know that things are remembered better when they are effortfully entered into memory – through repetition, and through multiple modalities like writing them out and reading them. If you quickly and superficially read something, as we do all the time when we are on the internet or zooming from email to website to app, then you cannot put that information into memory as efficiently. For most of us, it just won’t stick.

On the other hand, shouldn’t the vast amounts of information we have at our fingertips aid us in our creative endeavors? Haven’t our world and the vision we have of what is possible expanded? Couldn’t this make us more creative? Perhaps, by delegating some information to our external second brains, we are simply freed up to focus our minds on what is important, or on what we want to create (credit to my student Lee Dunn for that point).

Also, I think many of us, me included, know that we NEED help negotiating the information glut that is our lives. We CAN’T keep everything we need to know memorized in our brains, so we need second brains like devices and the internet to help us. I don’t think we can or should always relate deeply to and memorize all the information we have to sift through. It is a critical skill to know what to focus on, what to skim, and what to let go of. This is perhaps the key ability of the digital age.

I also appreciate all the possibilities I have now that I would NEVER have had before were it not for the incredible breadth and speed of access to information. As a scientist, this has transformed my professional life for the good and the bad – along with opportunities comes the frequently discussed pressure to always work. But give up my devices? I’d rather give you my left arm (75% joking).

As a child developmentalist and psychologist, I feel that we have to acknowledge that these shifts in how we learn, remember, and create might start affecting us and our children – for good and bad – sooner than we think. This isn’t just the current generation saying “bah, these newfangled devices will ruin us (while shaking a wrinkly fist)!!!” I think these changes are evolutionarily new, all-pervasive, and truly different. We as a society have to contend with these changes, our brains have to contend with these changes, and our children are growing up in a time in which memory as we think of it may be a thing of the past.