What the Algorithm Can't See
AI has changed how Crossbridge works. We are grateful for its efficiency and sheer power: the ability to synthesize research, organize complexity, and move faster on things that used to take hours. But over the past few months, we've also accumulated a set of human stories, moments where technology couldn't help, couldn't reach far enough, couldn't substitute for what the work actually requires. This month, Jennifer explores both in a four-part series.
AI & the Work We Do, Part 1 of 4
My daughter lives in San Francisco, works inside the world where AI is being built and debated and reimagined daily, and when she visited recently the conversation turned (as it tends to when she's home) to where technology is heading. I told her what Rebekah and I have been exploring: how we're using AI in our practice, how it helps us synthesize research and move faster on things that used to take hours. She nodded. Then she said something I haven't stopped thinking about: most of what's been written about AI is wrong, and you really have to know how to use it.
She's right, and she would know. What she meant is this: most of the content about AI was written before people understood it well, by people who still don't, and AI is learning from that content. It is, in a very real sense, trained on its own misinformation. The same problem exists when families use AI to research therapeutic programs. It surfaces whatever has been written, without any ability to evaluate whether what was written is accurate, current, or complete. Garbage in, garbage out. Except the garbage is dressed up in confident, well-organized prose, and there is nothing in the output that tells you not to trust it.
That is not a minor limitation. For families making high-stakes decisions about their children's care, it is the limitation.
The Problem With Using AI for Therapeutic Program Placement
There is something else AI cannot know, not because it hasn't been written yet, but because it can't be written. Some things don't survive being put into words. The energy in a room. The feeling of a program. Whether the kids there are living in it or just getting through it. These aren't data points that haven't been captured yet. They're things that can't be captured at all.
What AI Can't Tell You About Therapeutic Programs
Every therapeutic program has a client culture: a peer group, an unwritten social ecosystem that a child walks into on day one. Some programs draw kids who are intellectually intense, wonderfully quirky, a little outside the mainstream. Others skew toward kids who are more mainstream (think lax bro). Neither is wrong, but the wrong peer culture can undermine even the best clinical model, because adolescents don't heal in a vacuum. The same is true of academics. Every program has a school component, but the range is enormous, and whether a child needs academic rigor to feel like himself (or would drown in it) is as clinically significant as anything else in the placement. None of this is written about online. It is too specific, too dynamic, and too sensitive to document. It lives in the knowledge of people who have visited enough programs, over enough years, to have developed a feel for who thrives where and why.
What a Therapeutic Educational Consultant Sees That AI Cannot
And then there is the second problem (the one that no amount of better data could solve).
I was recently in California visiting Key Healthcare, a program serving teenagers navigating some of the hardest seasons of their young lives. I sat down with the founder, Ryan Blivas, and he told me something that has stayed with me. Ryan had been sent to treatment himself as a child. What shaped his entire vision for Key Healthcare, the thing that drives every decision he makes about how his program runs, is that he knows what it feels like not to be seen. Not to be heard. He built something specifically designed to make sure his clients never feel that way.
No algorithm surfaced that for me. No database contains it. I was sitting across from him, listening, and I felt the weight of it.
This is what the work looks like in practice. Not reading a brochure. Walking the halls. Checking behind the toilets and pulling back the shower curtains (yes, I do that) because mold and neglect tell you something that marketing materials never will. Asking staff what they love about working there and watching their faces when they answer. Watching the kids. Are they engaged or disengaged? Bright-eyed or dulled out? You can feel whether kids in a program are living in it or just surviving it. You can feel whether the people running it genuinely care (about their work, about these particular teenagers, about what happens to them after they leave).
Nobody writes about that. You cannot write about that without being there. And you cannot be there meaningfully without knowing what you're looking for.
That work requires a person who has spent years learning how to see. And then it requires them to actually show up and look.
Jennifer is a therapeutic educational consultant and MSW, and partner of Crossbridge Consulting. She and her partner Rebekah Jordan work with families navigating complex educational and therapeutic placements for children, adolescents, and young adults. This is the first in a four-part series on AI and the irreplaceable human work at the heart of their practice.