Seven years in, the clock won

For the last seven years, California startup Kintsugi has been building AI meant to detect signs of depression and anxiety from a person’s speech. That plan has now collided with one of the more familiar realities of health tech: the FDA does not move at startup speed.

After failing to secure clearance in time, the company is shutting down and releasing most of its technology as open source. Some of that work could still end up being useful outside mental health care, including in the increasingly crowded business of spotting deepfake audio.

What the software was supposed to do

Mental health screening still leans heavily on questionnaires and clinical interviews instead of lab tests or scans. Kintsugi’s system took a different approach. Rather than focusing on what people said, it analyzed how they said it.

The idea is not exactly radical. Pauses, sentence structure, speaking speed and other speech patterns have long been associated with mental health conditions. Kintsugi said its model could detect more subtle changes than a human listener might catch, although the company has not publicly spelled out which features actually drive its predictions. In peer-reviewed research, Kintsugi said its results broadly matched established self-report depression screeners when the system was tested on short speech samples.
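
To make that concrete, here is a minimal sketch of the low-level speech-timing features such a system might compute, pause behavior and voiced-segment length, using the open-source librosa library. The feature choices and the 30 dB silence threshold are illustrative assumptions, not a description of Kintsugi's actual pipeline.

```python
# Illustrative speech-timing features of the kind voice-based screeners
# reportedly rely on. These are assumptions, not Kintsugi's feature set.
import librosa

def speech_features(path: str) -> dict:
    y, sr = librosa.load(path, sr=16000)        # mono audio at 16 kHz
    total_s = len(y) / sr                       # clip length in seconds

    # Non-silent intervals; anything 30 dB below peak counts as a pause.
    voiced = librosa.effects.split(y, top_db=30)
    voiced_s = sum(end - start for start, end in voiced) / sr

    return {
        "duration_s": total_s,
        "pause_ratio": 1.0 - voiced_s / total_s,   # fraction spent silent
        "num_pauses": max(len(voiced) - 1, 0),     # gaps between segments
        "mean_voiced_segment_s": voiced_s / max(len(voiced), 1),
    }
```

A real system would feed features like these, alongside many others, into a trained classifier; nothing in this sketch predicts anything on its own.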

The company pitched the technology as a complement, or possible alternative, to tools such as the Patient Health Questionnaire-9, better known as the PHQ-9. That questionnaire is a standard part of primary care and psychiatry, but it still depends on patients accurately describing what they are experiencing. Kintsugi argued that a voice-based model could offer a more objective signal, help screen more people, and scale across health systems, insurers and employer programs.
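
For comparison, the PHQ-9 itself is simple arithmetic over self-report: nine items, each rated 0 to 3, summed to a 0-to-27 total and bucketed into standard severity bands (Kroenke et al., 2001). A minimal scoring sketch:

```python
# Standard PHQ-9 scoring: nine items rated 0-3, summed, then mapped
# to the published severity bands.
def phq9_score(answers: list[int]) -> tuple[int, str]:
    assert len(answers) == 9 and all(0 <= a <= 3 for a in answers)
    total = sum(answers)
    bands = [(5, "minimal"), (10, "mild"), (15, "moderate"),
             (20, "moderately severe"), (28, "severe")]
    severity = next(label for cutoff, label in bands if total < cutoff)
    return total, severity

print(phq9_score([1, 2, 1, 0, 2, 1, 1, 0, 1]))  # (9, 'mild')
```

The point of a voice-based signal is to sidestep the obvious weakness here: every one of those nine numbers depends on the patient's own account.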

The catch, of course, is that anything positioned as a medical product has to clear the FDA.

The De Novo route was not built for this

Kintsugi was seeking clearance through the FDA’s De Novo pathway, which is meant for novel, low-risk medical devices that do not already have a direct equivalent on the market. In theory, that sounds like a sensible lane for new software. In practice, it can still mean years of data collection and regulatory back-and-forth.

Kintsugi founder and CEO Grace Chang told The Verge that the company spent much of the process teaching regulators about AI. She also said the framework itself fits AI badly, because it was largely designed around older kinds of devices, such as hip implants, surgical tools and pacemakers, whose designs do not keep changing after approval.

For AI systems, that creates an awkward tension. If the model keeps improving, regulators may want something more fixed. If the model gets frozen in place to satisfy the regulator, the software loses one of the main reasons anyone wanted AI in the first place.

Chang said that even with the Trump administration pushing to cut red tape and speed AI products into the real world, regulatory experts told her there was "nothing that helps them do that except loud yelling from the top." Federal government shutdowns slowed the process further, and the startup eventually ran out of money while waiting for its final submission.

Funding dried up before the finish line

As Kintsugi’s runway shortened, efforts to raise more capital came up short. Chang said the company decided against accepting what she described as "predatory" short-term financing offers just to make payroll. One proposal, she said, would have provided about $50,000 a week in exchange for $1 million in equity.

Instead of taking that route, the team chose to open-source most of the technology so that others could continue the work. Investors, unsurprisingly, were not thrilled.

Open source brings new risks

Making a mental health screening model public raises obvious concerns. Tools meant to identify signs of depression or anxiety could be used outside clinical settings by employers or insurers, where the usual healthcare safeguards would not apply. Once the software is out in the wild, there is very little to stop people from using it in ways the creators never intended. It is a rare case of public release winning out over control, and not an entirely comforting one.

Nicholas Cummins, a senior lecturer in speech analysis and responsible AI in health at King’s College London, told The Verge that open-source releases often do not come with the detailed "paper trail" regulators expect. That includes clear records of how a model was trained, validated and tested for safety. Without that documentation, he said, it could be difficult for a product built on the code to make it through FDA review.

Cummins suggested that most companies would probably use Kintsugi’s model as a starting point and then add their own data and validation. Even then, he warned that voice-based systems remain imperfect. Depression, in particular, can show up differently across people, languages and cultural contexts, and performance depends heavily on the diversity and structure of the speech data used to train the system.

Chang did not dismiss the misuse concerns, but she said "it’s less of a concern in practice than it might appear in theory." The groups most likely to abuse the technology, she argued, are also the ones that would face the highest barriers to actually deploying it. In her view, "the more realistic risk is underuse, not misuse."

One part of the company stays behind the curtain

Not all of Kintsugi’s technology has been released publicly. Chang said some of it is being kept private for security reasons, especially the system that can detect synthetic or manipulated voices.

That capability emerged when the team used AI-generated speech to strengthen its mental health models. The synthetic audio did not contain the vocal signals the system expected, which showed that it could distinguish between human and AI-generated voices. Given the surge in AI spam, fraud and deepfakes, that is a useful trick, and unlike depression screening, it is not subject to FDA oversight.
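
The mechanism, as described, amounts to an out-of-distribution check: vocal features the model expects in human speech come back weak or missing in synthetic audio. Here is a hedged sketch of that general idea, an assumption about the approach rather than Kintsugi's withheld system:

```python
# Toy out-of-distribution detector: learn per-feature statistics from
# known-human recordings, then flag clips whose features sit far outside
# that distribution. Illustrative only; not Kintsugi's detector.
import numpy as np

def fit_human_profile(features: np.ndarray):
    """features: (n_clips, n_features) matrix from known-human audio."""
    return features.mean(axis=0), features.std(axis=0) + 1e-8

def looks_synthetic(clip: np.ndarray, mean, std, z_thresh=3.0) -> bool:
    z = np.abs((clip - mean) / std)      # per-feature z-scores
    return bool(z.mean() > z_thresh)     # far from human norms -> flag
```

Any production detector would be far more sophisticated, but the underlying observation is the same one the team stumbled into.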

Chang declined to say what she will do next, or whether the security-focused technology might resurface elsewhere. She said she hopes another team will build on Kintsugi’s work and carry it through the final stages of the FDA process.

Without broader changes, though, Kintsugi’s shutdown is unlikely to be the last case of startup ambition meeting medical regulation and running out of time. Chang said she hopes the result does not scare other founders away from trying anyway.