Every call on Jargon is recorded, transcribed, and analyzed. We generate so much great data that the transcripts are just the beginning. We do our best to combine what people said with how they said it, giving you a terrific view into the context of a conversation.
Here's a close-up from our Sample Report: https://www.jargon.ai/flash-report/sample.
As you can see, there's a map of the conversation based on who spoke when, so you can quickly see how the conversation flowed and what happened at any specific moment. But the fun doesn't stop there! We also map the mood based on participants' facial expressions, so you can see how the tone of the call changed over time - just switch the toggle from "Speech" to "Mood" in the upper right.
While you scroll, you can see exactly where you are on the map. Or, you can click on the map to jump right to that part of the conversation.
Reading the Transcript
The transcript is interlaced with highlights that call out what we think of as "interesting" moments in the conversation. Questions, keywords, screen shares, and follow-ups are just some of the highlights we generate to try to give you more context about your conversations.
When we detect strong emotional signals from participants, we highlight those moments in blue and expose a "Show Signals" button. Here's an example:
As you can see, we give you that information even when a person wasn't speaking at all and was simply reacting to what others were saying. You can see that Priya was smiling and expressing Joy when John explained what was in the new design. This can be an incredibly powerful tool for picking up visual cues from your audience during a presentation.
We expose many different signals and elements in these transcripts, and we'd love to hear what else we could include that would be helpful. Let us know at firstname.lastname@example.org.