Simple Speech & Text Emotion API
for Developers

Quickly integrate DeepAffects' secure, scalable, highly accurate, data-rich speech & text models

DeepAffects APIs enable developers to surface measurable metrics on the emotions embedded in existing content by applying powerful machine-learning models through an easy-to-use REST API.

Denoising API: 
Signal vs. Noise

Media recordings are susceptible to noise, which may be random or white noise embedded in the audio file. Denoising algorithms remove this noise. Look at and listen to the sample audio clips and their corresponding outputs displayed below:

[Audio samples: noisy input clips and their denoised outputs]
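To build intuition for what a denoiser does, here is a toy sketch (not DeepAffects' algorithm) that suppresses additive white noise with a simple moving average, on a synthetic signal:

```python
import numpy as np

def moving_average_denoise(signal, window=11):
    """Toy denoiser: smooth additive white noise with a moving average.
    Production denoising APIs use far more sophisticated models."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

# Demo: a clean low-frequency tone corrupted by white noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + rng.normal(0, 0.3, t.size)
denoised = moving_average_denoise(noisy)

# Mean squared error against the clean signal drops after denoising.
mse = lambda a, b: float(np.mean((a - b) ** 2))
print(mse(noisy, clean), mse(denoised, clean))
```

Averaging neighboring samples cancels out zero-mean noise while largely preserving the slowly varying signal, which is why the denoised error is lower than the noisy error.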

Speaker Diarization API:
Who spoke when?

Speaker recognition/diarization identifies individual people based on the unique characteristics of their voices. In an audio recording with multiple speakers (a conference call, a dialog, etc.), the Diarization API identifies each speaker at precisely the times they spoke during the conversation. Below is an audio recording of a debate; the image shows the clusters generated from each speaker's speech pattern and the precise times they participated in the conversation.

[Audio sample: debate recording with speaker clusters plotted over time]
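Conceptually, diarization clusters per-frame voice features so that each cluster corresponds to one speaker. Below is a minimal sketch on synthetic "embeddings" with a tiny k-means; the data and clustering method are illustrative assumptions, not the API's actual models:

```python
import numpy as np

def kmeans(X, k=2, iters=20):
    """Tiny k-means with deterministic init from the first and last frames."""
    centroids = X[[0, -1]][:k]
    for _ in range(iters):
        # Assign each frame to its nearest centroid, then recompute centroids.
        labels = np.argmin(((X[:, None, :] - centroids) ** 2).sum(-1), axis=1)
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

# Synthetic per-frame voice embeddings for two well-separated speakers.
rng = np.random.default_rng(1)
speaker_a = rng.normal(0.0, 0.2, size=(50, 8))   # frames 0-49
speaker_b = rng.normal(3.0, 0.2, size=(50, 8))   # frames 50-99
frames = np.vstack([speaker_a, speaker_b])

labels = kmeans(frames)  # one speaker label per time frame: "who spoke when"
```

Mapping each frame's label back to its timestamp yields exactly the kind of "who spoke when" timeline the Diarization API returns.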

Emotion Recognition API:
If Emotions Could Talk

The Emotion Recognition API identifies emotions from the paralinguistic properties of speech, without text-based references. Among the emotions extracted are anger, stress, and disgust. Below are the emotion metrics extracted from the given audio clip.

[Audio sample: clip with extracted emotion metrics]
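A typical REST integration sends audio as a base64-encoded JSON payload. The endpoint URL and field names below are assumptions for illustration only, not DeepAffects' documented schema; consult the developer portal for the real contract:

```python
import base64
import json

# Hypothetical endpoint -- check the developer portal for the real URL.
EMOTION_ENDPOINT = "https://api.example.com/emotion/recognize"

def build_emotion_request(audio_bytes, encoding="wav", sample_rate=16000):
    """Package raw audio bytes as a JSON body for the (assumed) API schema."""
    return json.dumps({
        "content": base64.b64encode(audio_bytes).decode("ascii"),
        "encoding": encoding,
        "sampleRate": sample_rate,
    })

body = build_emotion_request(b"RIFF....fake-wav-bytes")
# A real integration would then POST `body` to EMOTION_ENDPOINT with an API key.
```

Base64 encoding keeps binary audio safe inside a JSON body, which is the common pattern for synchronous speech-analysis endpoints.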

Paralinguistic API:
More Than Words

The Paralinguistic API provides measurable metrics for features such as pitch, rate, amplitude, shimmer, tempo, mel-frequency cepstral coefficients (MFCCs), and more. From the following audio clip, we have extracted the pitch & tempo metrics that inform emotional-intelligence (EI) attributes.

[Audio sample: clip with extracted pitch & tempo metrics]
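As a taste of what a paralinguistic feature extractor computes, here is a minimal pitch estimator based on the classic autocorrelation technique, run on a synthetic tone (an illustrative sketch, not DeepAffects' implementation):

```python
import numpy as np

def estimate_pitch(signal, sample_rate, fmin=80.0, fmax=400.0):
    """Estimate fundamental frequency (Hz) from the autocorrelation peak
    within a plausible human pitch range [fmin, fmax]."""
    corr = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lo = int(sample_rate / fmax)   # shortest plausible period, in samples
    hi = int(sample_rate / fmin)   # longest plausible period, in samples
    lag = lo + int(np.argmax(corr[lo:hi]))
    return sample_rate / lag

# Synthetic 220 Hz tone, one second at 8 kHz.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220 * t)
pitch = estimate_pitch(tone, sr)
```

The autocorrelation peaks at lags that match the signal's period, so the strongest peak in the searched range recovers the fundamental frequency to within a sample's worth of resolution.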

Custom API: 
One Size Doesn't Fit All

We've come across a number of prediction use cases whose needs don't quite align with DeepAffects' standard offering. For those, we provide a custom solution that learns to recognize patterns in your data. This means you can not only apply our pre-trained models to a specific use case but also train on the most relevant data available.


Developer Portal

An easy step-by-step guide to integrating the speech & text APIs for developers. Integrate with secure & scalable speech & text APIs, humanize your communication data, and put insights to work.