The Water Level: Disability and Technology

When driving comes up, I tense.

It doesn’t happen every time, but when we’re with other families and someone mentions permits or practice drives or the freedom of having a new driver in the house, something tightens in me. I watch my son. He doesn’t say anything. He just goes quiet and waits for the conversation to move somewhere else. I usually help it along.

I don’t know exactly what he’s thinking in those moments. I don’t ask. Maybe it’s nothing. Maybe it’s everything. But I was in that exam room when his neurologist answered his question about driving, and I know what “probably not” sounds like when it lands.

His friends are getting permits. He’s watching that happen from the outside, the way he watches a lot of things.

I think about his future more than I let on to him. I think about medications they haven’t discovered yet. Therapies. Devices. I think about his independence — what it might look like, what it might require. Autonomous vehicles have been part of that thought for a while. Not as a certainty, just as a possibility worth holding onto. So when I came across a podcast recently about autonomous vehicles and what they might mean for people who can’t drive, I expected something that confirmed what I’d been quietly hoping. Instead it pulled in two directions at once.

The first part was about job displacement — the ways AI is already eliminating work, particularly at the lower end. Automation moving through the kinds of jobs that don’t require a degree or specialized training. The ones with structure and repetition. Then the second half shifted to autonomous vehicles and the disabled community. The argument was straightforward: people who can’t drive because of a medical condition, a physical limitation, age — autonomous vehicles could give them something they don’t currently have. Independence. The ability to get somewhere on their own.

And then someone in the episode pointed out that the disabled community was being used to make the case for technology that primarily serves other interests. That the promise of accessibility was real but also convenient. I don’t know where the truth lands on that. Probably somewhere uncomfortable.

My son is sixteen. He wants to be a hockey player or a streamer. Neither is straightforward. Hockey as a player isn’t realistic, though being involved in the sport in some other way might be possible someday. Streaming is something he genuinely enjoys, but it requires consistency, memory, sequencing — things that are hard for him right now, harder than they look from the outside. He has dreams the way any teenager has dreams. He just has more walls. And the jobs most likely to be within reach for him — the ones with structure, repetition, and the right support in place — are the same ones that automation is already eliminating.

I’ve worked in AI for more than a decade. I use it every day. The work it’s making easier is white-collar work — the kind that requires education, executive function, the ability to synthesize and decide. The jobs it’s eliminating are the ones that could work for him.

The water level keeps rising. He’s already underwater.

That’s the part I can’t think my way out of. Autonomous vehicles might eventually give him a way to get to a job on his own. That would matter. That would be real. But if the job itself has been replaced by the time the technology arrives, the independence doesn’t have anywhere to go.

I don’t know how to hold both of those things. I’m not sure I’m supposed to yet.

What I keep coming back to is that exam room. His neurologist exhaled before she answered. My son sat there and took it without flinching. Part of him probably already knew. Part of him was hoping for a different answer.

He’s been doing that his whole life — absorbing the gap between what other kids have and what’s available to him. Sitting quietly while the conversation moves on. Waiting.

I don’t know what the world looks like when he’s thirty. I don’t know which promises will have been kept and which ones will have turned out to be convenient. I don’t know if the door that technology seems to be opening will still be open, or what will be on the other side of it if it is.

I just know he’s sitting with questions he shouldn’t have to sit with at sixteen.

And I know what it looks like when he goes quiet.


I also wrote about this topic from a different angle on davidmonnerat.com, where I explore the structural side of the question — who technology is built for, who it displaces, and why those two groups are often the same people. You can read that piece here: The Other Hand: AI, Disability, and the Cost of Progress.

A Song of His Own

“Dad, I made a song.”

That was the first thing my son said to me when I got home from work.

“That’s cool, pal!” I responded, thinking he had jotted down a few lyrics to show me.

“Do you want to hear it?” he asked.

Hear it, I thought. Interesting. “Of course!” I said, following him to his room.

I sat on the corner of his bed as he went to the computer.

“Ready?”

I nodded.

He hit play, and from his speakers came an actual rock song. Drums. Bass. Electric guitar. And a vocalist singing about the Colorado Avalanche (my son’s team) defeating the Tampa Bay Lightning (my team) in the NHL Stanley Cup Finals in 2022, the year we were in Colorado and went to a finals game. A game that, as my son constantly reminds me, the Avalanche won 7-0 on their way to hoisting the cup.

As I listened to the song, I watched my son’s smile continue to widen, especially when the lyrics touched on the game we attended — a smile of pride, connection, and love. It’s the single best sight I will ever see.

Tampa’s thunder tried to fight,
But Colorado owned the night.

When the song finished, I stared with my jaw dangling open, which caused his smile to grow even wider.

“How?” I asked.

And he walked me through his process, prompting an AI tool with styles, themes, and concepts until he had a completed song.

“Well,” I said. “This has to be on Spotify.”

“Really?” he asked, his voice caught somewhere between disbelief and excitement.

“Really,” I confirmed. “I’ll figure out how to get it distributed so that everyone can hear it.”

For all the challenges my son has, his creativity and ability to figure things out are truly inspiring. When my wife and I were discussing her next book, my son decided to write a Fortnite Tips book, complete with an illustrated title. He gets inspired by videos of his favorite players and builds giant arenas and stadiums in Minecraft—sometimes following tutorials, other times just experimenting until it works. And now, he figured out how to make a song.

It could have been so easy for him to let obstacles define him. To look at the world through the lens of what isn’t possible. But he doesn’t. He assumes everything is possible, and then he goes and proves it. As a parent, it’s more than I could have ever wished for him.

A few weeks later, I went into his room and showed him my phone. I had the Apple Music app open, and queued up, ready to play, was the hit new song from the artist neurodefender titled “Avalanche Rising.”

We sat together and listened to it again. He gave me the same look and smile as the lyrics recounted the Avalanche victory. He grabbed his phone and pulled the song up on Spotify, replaying it for the rest of the night. When he joined his friends online, I could hear him telling them about his song, too.

And in that moment, I realized something: no matter the struggles, no matter the setbacks, my son keeps finding ways to make his voice heard. Sometimes literally. Always beautifully. And I’ll never stop listening.

The Other AI: Autonomy and Influence

My son has been asking more frequently about living by himself. We’ll talk about independence and responsibility, and loosely sketch out goals to help him move in that direction. But I also watch as he struggles to remember whether he’s taken his medication, put on deodorant, or pulled his sheets up when making his bed.

As I watched him try to piece those routines together, I thought about the technology I work with and whether it could help him.

I’ve been involved with computers and technology for most of my life, building products with bits and bytes of code and data. For the past ten years, I’ve worked in the evolving field of artificial intelligence (AI).

I recognized early on that AI could potentially transform my son’s life. As the technology matured, I watched it advance the state of medicine and healthcare.

Today, AI algorithms power diagnostic tools, accelerating the time to detect, identify, and treat complex medical conditions. AI is accelerating drug discovery, helping researchers identify promising treatments faster than ever before. It is also being used to examine genetic data to identify the right medication and dosage for individual patients.

AI could improve his quality of life in ways that weren’t possible only a few years ago. Pattern recognition can alert us when he misses a medication or a meal. Personal assistants can provide reminders, keep him on task, and communicate with him in a way that he understands. Self-driving cars will give him mobility and access to a wider world. AI-driven tools can assist him with complex tasks, help him communicate ideas, and give him greater autonomy and independence.

That’s the promise and the potential.

But here’s the problem. We live in a world where AI is already causing harm.

Inherent challenges with the technology, especially with generative AI (e.g., ChatGPT), result in hallucinations, where the algorithm makes things up. The black-box nature of these algorithms makes them unpredictable and impossible to test fully, which can result in harmful behavior. And these algorithms are owned by corporations that control the data, usage, and output, and can tune them to fit their agendas.

Beyond technology, people have been using these tools for nefarious purposes. It’s easy to create a false but believable story and share it on social media. It’s also easy to create completely believable but fake images and videos to mislead viewers. These bad actors are using the technology to push false narratives and generate mistrust and dissent in society.

My son struggles with memory and executive functioning. It impacts his ability to reason and to determine whether what he is reading is fact or opinion, truth or lies. While I think society at large has lost its ability to think critically, people like my son are especially susceptible to these false narratives and the harm they can cause.

So while I’m building the future with AI, I’m also guarding the present for my son. I want him to have access to all the promise this technology offers — the support, the independence, the chance to live on his own — without falling victim to its dangers. I have to be his guide, his filter, and his advocate.

Because while AI might one day help him remember his medication or build a career, it won’t teach him who to trust, what’s real, or what truly matters. It’s my job to walk beside him, protect him, and help him make sense of a world that’s changing faster than any of us can keep up with.