This talk by developer, entrepreneur, speaker, and social critic Maciej Ceglowski is a must-read primer for _everyone_ on the ethics of digital technology.
Speaking at the Personal Democracy Forum (PDF) 2016 was one of those paradigm-shifting conference experiences for me. Before PDF, I tended to hear technophilic, almost Pollyannaish narratives about how technology can make our lives, and our civic lives, better. I was clearly behind the times, because I now see the narrative shifting and morphing into a much more challenging, questioning viewpoint that might be best described by the saying "keep your friends close, but keep your enemies closer."
In almost every talk I heard, technology and the digital economy were described as a double-edged sword: a way to ignite change, but with high potential costs, and full of booby traps. Those who create technology? A mixed bag at best. Anil Dash didn't mince words when he called the technocrats and Silicon Valley billionaires liars and the new robber barons. Kentaro Toyama compared the digital economy to The Matrix, in which our personal data is the lifeblood of those same Silicon Valley billionaire evil robot overlords.
I have to admit that I take grim pleasure in the aptness of these metaphors, and have uttered identical words myself. However, it is also clear that these ideas are polarizing and, like extremism in politics, privilege emotion above logic to drive more fractious and divisive discourse. Luna Malbroux's hilarious talk about "EquiTable," a faux app she developed to create dialogue about social justice and equity, is a nice example of how to break away from bitter recriminations and instead use humor as a powerful weapon for change.
But if technology is a very sharp double-edged sword, how do we wield it without cutting ourselves? How do we, as Yvette Alberdingk Thijm described in her talk about using technology as civic witnesses, harness technology for good without allowing others to use it against us?
Keep your friends close…
PDF yielded many ideas and solutions. I mention only a few below (including mine). I was particularly interested in those ideas and solutions demanding that technology serve humanistic goals and that the well-being of people be part and parcel of how we design and build technology. To do this, we have to open our eyes and take a cold, hard look at how our romance with technology has caused us to take our hands off the wheel (no pun with driverless cars intended).
My talk (text can be found here) centered on technology and mental health. I argued that the psychological and emotional nature of the tech we build is not peripheral or ancillary; it is fundamental to shaping how we use tech for healing. Right now, technology and digital culture are precisely and relentlessly designed to hijack our attention and our emotional brains for the economic benefit of their creators. This is the basis of the attention economy. To gather, mine, and sell our personal data, technology needs to be addictive, keeping us looking, clicking, buying, eyeballs on the screen, swiping, checking, clutching our devices, hoping to hear the next best thing, to feel connected, soothed, and understood. This is counter to health promotion, and creates imbalance instead of balance, weakness instead of strength. The notion that technology is designed to hijack our brains was beautifully and compellingly described in a blog post just a few days after PDF by Tristan Harris.
I ended my talk with a call to action, that we must reclaim the technology culture to serve and amplify humanity and well-being, rather than serve the attention economy. We must further anchor this new culture in key values, including the value that our attention is sacred and valuable, not just the coin of the realm. We must own and be responsible for how we spend our precious attention.
Sherry Turkle observed how our excitement over the rapid pace of technological advances makes us forget some fundamental, common-sense things we know about life. For example, after research suggesting that self-reported declines in empathy among millennials could be caused by growing use of social media and digital communication, one researcher’s solution was to build an “empathy app.” Why would we ever think that technology could make us more empathic, that the thing that might have caused declines in empathy could also be the solution? Dr. Turkle described how many aspects of digital technology actually allow us to effectively hide from the challenges of feeling and expressing emotions in our relationships, to “sidestep physical presence” and seek “frictionless relationships.” Solution – we need to reclaim common sense and realize that we are the empathy app, as Dr. Turkle quipped.
danah boyd called our attention to the immense ethical disconnect in how the digital infrastructure of our civic lives – code – is constructed. This is an industry in “perpetual beta” and thus there are few if any standards, audits, or inspections of code. There also is little consideration of the resources taken up to maintain the immense glut of data generated every day, and little awareness of how bias and inaccuracy are built into data analytics. These questions are of the utmost importance because an increasing number of decisions in our personal and civic lives are being made based on algorithms and digital profiling. She exhorts us to be careful of how and what we code.
…but keep your enemies closer
As in everything, knowledge is power. I felt that we at PDF, speakers, participants, and audience alike, implicitly but universally agreed to keep our eyes open, to look our crush, technology, in the face and see that she may not be on our side anymore but to hope that it’s not too late. Technology is empowering, BUT…. We all agreed to spend more time on the “buts,” as well as on the when, how, and under what conditions we can reclaim technology for humanity. In his PDF talk, Kentaro Toyama evoked the great Isaac Asimov and the First Law of Robotics from Asimov’s “I, Robot” (A robot may not injure a human being or, through inaction, allow a human being to come to harm). In Asimov’s universe, the powers of technology are at their fundamental core designed and harnessed for the benefit of people. I believe that we must and can insist that our technology conform to this higher standard, and that with this as a guiding light, we can wield the double-edged sword of technology for more good than ill.
“Calming the Politics of Fear: Technology and the Anxious Brain” is my talk from Personal Democracy Forum 2016 (June 10, 2016), adapted here for a written format. This talk was part of a set of talks entitled “Tools We Need.” I argue that using technology in the service of health is a very sharp, double-edged sword, and that we must reclaim technology culture to serve and amplify humanity and well-being, rather than serve the digital economy. The video of the talk is available here.
I became a psychologist and a researcher because I wanted to help people overcome problems like anxiety and depression. But I quickly discovered that no one likes you when you are a mental health professional. Psychologists pry into people's minds and tell you it's your mother's fault. Psychiatrists prescribe you drugs with terrible side effects that emotionally numb you. It's no coincidence that Hannibal Lecter is a psychiatrist.
And that's when I got it. We psychologists and psychiatrists have profoundly failed people. We have failed to give people the treatments they need, offering instead treatments that are too expensive, too time-consuming, too hard to access, and, perhaps most importantly, deeply stigmatizing. Largely because of us, people fear that their hearts and minds will never heal and that they will continue to feel broken inside.
I believe that digital technology can offer us some unique ways out of this mess, and provide tools for both professionals and each of us as individuals to heal problems like anxiety and depression.
But I also believe that using technology in the service of health is a very sharp, double-edged sword, with high potential costs as well as benefits.
In my research lab, we study things called cognitive biases: invisible habits of thinking and paying attention that intensify and even cause anxiety, depression, and addiction. I've translated this research into digital techniques that are designed to short-circuit cognitive biases.
Let me explain cognitive biases by conducting a little experiment. Please fix your eyes on the screen. [[The following picture flashed up on the screen for 2 seconds]]
How many of you saw the angry face? How many didn't? The results of our experiment? Decades of research tell us that people who tend to be anxious or stressed detect that angry face more quickly, and pay attention to it longer and more intensely, than people who are relatively less anxious and stressed.
This preference to pay attention to and prioritize the negative is called the threat bias. And here’s the kicker. The threat bias piggybacks on one of the triumphs of evolution – the ability to quickly and automatically notice danger, which in turn triggers us to fight or take flight to deal with the danger.
But the threat bias hijacks and skews this evolutionary advantage. It acts as an unconscious information filter, an imbalance in what we pay attention to that makes us actually prefer and prioritize threat and negativity at the expense of the positive. When the threat bias becomes a rigid habit of looking at the world, it puts our fight-or-flight response on a hair trigger and skyrockets our feelings of stress and anxiety. We see monsters in the closet even when they're not there.
For example, imagine you're giving a talk, like I am now, with a smart audience in front of you and bright lights beaming down. If I had an amped-up threat bias, I would very quickly and intensely notice the one person in the audience who is frowning, shaking his or her head, maybe falling asleep. I would fail to notice all the interested and smiling faces in the audience, and get stuck on this person. The natural result: I feel more anxious and stressed, I am on the lookout for further negative information, and I ignore positive evidence that I'm doing a good job.
In this way, the threat bias drives the vicious cycle of stress and anxiety, takes up mental bandwidth, and puts us at a disadvantage when there is no real danger to face, when the monsters in the closet are only in our mind.
Now, this threat bias doesn’t sound so great. Not great at all. But I love the threat bias and other cognitive biases. That is because there is an empowering message hidden in the idea of cognitive biases. Biases are essentially habits. When we have a bad habit, we are not broken inside – to change, we just need to learn a new habit.
So I have spent a good part of my 20-year career studying how we can derail cognitive biases like the threat bias, learn new habits to heal the anxious brain, and translate these techniques into a digital format.
Over these 20 years as a researcher, I've done all the things that a researcher is supposed to do, and enjoyed the process: received grants, ran dozens of studies, published over fifty scientific papers on everything from the emotional lives of children to the neuroscience of the anxious brain, and became a full, tenured professor at the City University of New York, where I founded the Emotion Regulation Lab, The Center for Stress, Anxiety and Resilience, and the Center for Health Technology and Wellness.
But I only really began to make progress and question how my research on cognitive biases was making a difference when I was pregnant with my daughter. I was talking to my husband about how I felt stuck, and how maternity leave was going to be my chance to think outside the box, when he said, "Why don't you build an app for that?" An app, I said? That's ridiculous. There are too many "apps for that," ugh.
But, he got me thinking that maybe this really was a way to do things differently.
Enter attention bias modification, a technique I study in my lab and that takes the threat bias and turns it on its head. Attention bias modification sounds a little like this:
But I promise you, it’s not. Attention bias modification uses simple computerized techniques to rebalance the scales of attention to create a new habit of preferring and prioritizing the positive over the negative. It is perfectly suited to digital and mobile technology because it’s brief, cheap & easily accessible, and doesn’t require a shrink.
I've created an app called Personal Zen that embeds these techniques into an engaging, on-the-go format. Here is how it works: We see both an angry and a pleasant sprite quickly pop up in a field of grass. The sprites then burrow down into the field, but only the pleasant sprite leaves a trail of grass. Our task is to trace that winding trail. Because the angry and pleasant sprites appear at exactly the same time, our brain is forced to figure out what to pay attention to. By ALWAYS following the trail of the pleasant sprite, our brains learn to automatically focus on the positive and disengage from the negative. We start building a new habit of attention. Follow the joy.
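The core training contingency described above can be sketched in a few lines of code. This is a hypothetical, minimal simulation of an attention-bias-modification trial loop, not the actual Personal Zen implementation: two cues (one pleasant, one angry) appear simultaneously in different locations, and the response target always follows the pleasant cue, so that over many trials attention is pulled toward the positive. All names here (`make_trial`, `run_session`) are illustrative assumptions.

```python
import random

POSITIONS = ("left", "right")

def make_trial(rng):
    """Place the pleasant and angry cues in opposite positions.

    The trail to trace (the target) ALWAYS appears where the
    pleasant cue was, which is what trains attention toward it.
    """
    pleasant_pos = rng.choice(POSITIONS)
    angry_pos = "right" if pleasant_pos == "left" else "left"
    return {"pleasant": pleasant_pos, "angry": angry_pos, "target": pleasant_pos}

def run_session(n_trials=20, seed=0):
    """Generate one training session of n_trials trials."""
    rng = random.Random(seed)
    trials = [make_trial(rng) for _ in range(n_trials)]
    # The defining property of attention bias modification:
    # the target is never at the angry cue's location.
    assert all(t["target"] == t["pleasant"] for t in trials)
    return trials

session = run_session()
print(f"{len(session)} trials, first target at: {session[0]['target']}")
```

The design choice worth noticing is that the contingency is deterministic (target follows pleasant 100% of the time), unlike the assessment version of such tasks, where the target appears at each location equally often.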
It's deceptively simple, but clinical trials show that using Personal Zen and the attention bias modification techniques it is based on effectively rewires our brains to disengage from the negative and focus more on the positive, and this translates into reduced stress and anxiety after as little as a single use of the app.
The Politics of Technology and Fear
So Personal Zen is a technology-based way to help heal the anxious brain. Yet, I simultaneously believe that the digital technology culture as it stands now is also one of the most surefire ways to amp UP the threat bias and make our anxious brains worse.
We mediate our lives through mobile and digital technology. We know this; it's how we filter the tremendous complexity of our lives. But we are living in an attention economy in which news organizations, businesses, and our social networks are constantly pinging, ringing, and texting us, competing for our rapidly dwindling bandwidth of attention. We are on a digital mental treadmill. Corporations spend millions figuring out how best to keep us on that treadmill by hijacking and seducing our emotional brains: how to reward us, titillate us, and scare us into looking, clicking, buying, eyeballs on the screen, and how to mine, use, and sell our personal data.
The politics of fear are finding fertile soil in this attention economy, with fear-mongering politicians using these same techniques to drive opinion and votes, to amp up our anxieties and fears. The only good voter is an anxious voter.
The digital mental health field as it stands is not much better. There are thousands of mental health apps on the market, but fewer than 1% have ANY scientific evidence base. So, it’s essentially the Wild West, full of snake oil salesmen. This is tough on us consumers. How do we find the signal in the noise? The FTC’s crackdown on digital brain training companies like Lumosity, which was fined millions for unfounded medical claims, is a sign of the times.
The Future is Now
But let’s turn to the future.
It is crucial that at this key moment in time, we envision a new and revolutionary future for the role of technology in health. That future has to be now, and we have no time to waste. The digital technology culture in which health care is evolving is consciously and relentlessly designed to brain hack, co-opting our anxious brains, our addicted brains, our bored and restless brains. We have to disrupt the digital disruption of our lives.
Don’t get me wrong, the human race has been brain hacking for millennia, shaping and mediating how we view and make sense of reality – through language, religion, the arts, politics, education…. Along come radical advances in digital computing and now we have another tool – but it is a tool that should NOT be privileged above others. And we must take a cold, hard look at how in some contexts, the costs of these digital tools outweigh the benefits, leading to information overload, greater anxiety, and social disconnection.
So I say, let's step off the digital mental treadmill. We all know ways to do this, ways as simple as silencing the endless rings and buzzes of our notifications, turning off our devices during meals with our family and friends, and minimizing the time our loved ones, whether family, partners, or friends, see the backs of our devices rather than our faces. When we take these steps, we treat our attention as sacred and precious, as a resource to be spent wisely. These values must be front and center when we design and use health technology.
I challenge all of us today to reclaim technology to heal the anxious brain and heal the culture of fear: Designers, help us streamline screen time, with less time spent eyeballs-on-screen, and design technology that facilitates our ability to live truly connected and fulfilling lives; Consumers, demand digital health tools with scientific backing and be conscious of how you're spending your precious, precious attention; Politicians, draw on the best rather than the worst aspects of the attention economy. The only good voter is an informed voter. If we do these things, together, we will create the tools we need.