This project investigates how communication similarity influences negotiation outcomes, using time-stamped, speaker-identified transcripts to measure semantic alignment. The goal is to understand conversational speech through the lens of similarity and, from that understanding, to develop interventions that enhance collaboration between negotiators.
A well-studied phenomenon: negotiators who speak similarly achieve more collaborative outcomes, while disparities in speech lead to less favorable results. Does this hold at the linguistic level of semantics?
Note
Example outputs and instructions for running the app continue below. For theory and an in-depth analysis of results, please review the experiment results.
Scatter plot of semantic similarity, correlating self-similarity (coherence) with partner-similarity (responsiveness)
Based on semantic similarity, 4 general patterns of negotiation are observed, along with their relative strengths
Shows how negotiators selectively respond to what the other negotiator is talking about
Distribution of conversational modes (topics) in a transcript
Comparing how positively people speak about others, themselves, etc.
- Install the necessary requirements
- Open a terminal from the root folder and run `python src/app.py`
- This will produce a clickable link (the host IP address) to a webpage
- From the 'Drag and Drop' button at the top of the webpage, navigate to and select "demo_transcript.json"
- Wait a few seconds
- The results of the conversation should be visualized
- Sign up for the free meeting assistant software Fathom
- Host a virtual meeting with one other person (currently tested only with transcripts of up to 2 speakers)
- Save a copy of the transcript as 'your_transcript.txt'
Note
The app uses this parser module to prep the file for parameterization. It was last verified against Fathom's transcript format on 1/13/2025 and may break if that format changes.
- Open a terminal from the root folder and run `python src/app.py`
- Follow the link and use the 'Drag and Drop' button, then select 'your_transcript.txt'
- Wait between 15 seconds and a few minutes (it may take longer if you're downloading the AI models for the first time)
- 'your_transcript.json' will be saved in the same directory
- The results of the conversation should be visualized
Given a Fathom transcript, the parameterizer module generates a JSON file of formatted objects:
"id": 35,
"turn": 8,
"name": "D C",
"previous": "B K",
"text": " We are very involved in the eco aspect of the entire operation.",
"airTime": 6,
"wpm": 113,
"qType": null,
"nType": "first",
"topic": "ecosystem",
"topicConfidence": 0.37485435605049133,
"emotion": "neutral",
"emotionConfidence": 0.5431026220321655,
"responseID": 30,
"responseScore": 0.26121166348457336,
"coherenceID": 25,
"coherenceScore": 0.22765521705150604,
"repeatID": 3,
"repeatScore": 0.4523921310901642,
"localMaxDistro": [
20
]
The JSON objects log data on individual sentences; each turn represents a block of sentence IDs during which one speaker speaks.
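As a quick orientation to this schema, here is a minimal sketch that loads a parameterized transcript and summarizes air time and mean responsiveness per speaker. It assumes the output file is a flat list of objects like the one above; the aggregation choices are illustrative, not part of the app.

```python
import json
from collections import defaultdict

# Load a parameterized transcript (e.g., the bundled demo output).
# Assumption: the file is a flat list of per-sentence objects like the example above.
with open("demo_transcript.json") as f:
    sentences = json.load(f)

talk_time = defaultdict(int)        # total seconds spoken per speaker
responsiveness = defaultdict(list)  # responseScore values per speaker

for s in sentences:
    talk_time[s["name"]] += s["airTime"]
    if s.get("responseScore") is not None:
        responsiveness[s["name"]].append(s["responseScore"])

for name in talk_time:
    scores = responsiveness[name]
    avg = sum(scores) / len(scores) if scores else 0.0
    print(f"{name}: {talk_time[name]}s of air time, mean responsiveness {avg:.2f}")
```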
- airTime = time spoken per person (in seconds)
- wpm = words per minute; the rate of speaking per person per turn
- qType = question type (open-ended or closed-ended)
- nType = narrative type (first, second, third, passive)
Negotiations involve patterns of self-disclosure, questioning, and acknowledgment, with balanced proportions of these elements indicating healthy collaboration. A rule-based system labels narrative and question types.
Collaborative outcomes are less likely if negotiations focus heavily on third-person statements (e.g., references to people not present). Balanced "I" and "you" statements are preferred, as an imbalance may signal one party dominating or withholding their position.
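The project's actual rules live in its source code; the following is a hypothetical sketch of how rule-based labeling of question and narrative type might look, using simple question-word and pronoun heuristics of my own choosing rather than the app's real rule set.

```python
import re

OPEN_STARTERS = ("what", "how", "why", "tell me", "describe", "explain")
FIRST = {"i", "we", "me", "us", "my", "our"}
SECOND = {"you", "your", "yours"}
THIRD = {"he", "she", "they", "them", "his", "her", "their"}

def label_question(sentence: str):
    """Return 'open', 'closed', or None for a non-question (illustrative heuristic)."""
    s = sentence.strip().lower()
    if not s.endswith("?"):
        return None
    return "open" if s.startswith(OPEN_STARTERS) else "closed"

def label_narrative(sentence: str):
    """Return 'first', 'second', 'third', or 'passive' based on pronoun counts."""
    words = re.findall(r"[a-z']+", sentence.lower())
    counts = {
        "first": sum(w in FIRST for w in words),
        "second": sum(w in SECOND for w in words),
        "third": sum(w in THIRD for w in words),
    }
    best = max(counts, key=counts.get)
    return best if counts[best] > 0 else "passive"

print(label_question("How do you see the eco aspect?"))             # open
print(label_narrative("We are very involved in the eco aspect."))   # first
```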
- topic
- emotion (positive, neutral, negative)
Using sentiment analysis, this compares negotiators' emotional expressions, hypothesizing that shared positivity around specific topics increases the likelihood of agreement. Conflict resolution research supports this, suggesting that de-escalating negative emotions involves acknowledging and aligning with them before reframing to positivity, fostering collaboration and resolution.
The software will indicate how emotions are distributed across various topics. By flagging negative emotions, we can develop interventions to reframe them as positive interests.
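The specific models behind the topic and emotion fields aren't named in this section. One plausible setup, sketched below, uses Hugging Face pipelines: zero-shot classification against a hand-picked list of candidate topics, and a stock sentiment pipeline for emotion (a three-class model would be needed to reproduce the positive/neutral/negative labels exactly). The model names and topic list are assumptions.

```python
from transformers import pipeline

# Assumed models; the app may use different ones.
topic_clf = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
emotion_clf = pipeline("sentiment-analysis")  # default binary sentiment model

candidate_topics = ["ecosystem", "pricing", "logistics", "timeline"]  # illustrative

sentence = "We are very involved in the eco aspect of the entire operation."

topic = topic_clf(sentence, candidate_labels=candidate_topics)
emotion = emotion_clf(sentence)[0]

print(topic["labels"][0], round(topic["scores"][0], 3))  # best topic + confidence
print(emotion["label"], round(emotion["score"], 3))      # e.g., POSITIVE + confidence
```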
- coherence
- responsiveness
- repetition
Coherence measures how self-similar a negotiator's statements are, both within their current turn and relative to their previous turn, calculated as the maximum cosine similarity between statements from the current and previous turns.
Responsiveness measures how similar a negotiator's statements are to their partner's previous statements, again based on the maximum cosine similarity. Both metrics are derived using a transformer model (AI) to embed semantic meaning into vector space and compute distances between the resulting vectors.
A repetition is defined as the highest-scoring similar statement that is not in the current or previous turn. Strong repetitions indicate highly similar word choice and semantic alignment.
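The embedding model isn't specified here; as a sketch, the snippet below uses the sentence-transformers library (the model choice is an assumption) to compute the maximum cosine similarity between a set of current statements and a set of reference statements, which is the core operation behind both coherence and responsiveness. The index of the best-matching reference statement is the kind of value that would populate fields like coherenceID and responseID.

```python
from sentence_transformers import SentenceTransformer, util

# Assumed embedding model; the app may use a different transformer.
model = SentenceTransformer("all-MiniLM-L6-v2")

def max_similarity(current_statements, reference_statements):
    """Max cosine similarity between any current and any reference statement,
    plus the index of the best-matching reference statement."""
    cur = model.encode(current_statements, convert_to_tensor=True)
    ref = model.encode(reference_statements, convert_to_tensor=True)
    sims = util.cos_sim(cur, ref)  # matrix: current x reference
    _, best_ref = divmod(int(sims.argmax()), sims.shape[1])
    return float(sims.max()), best_ref

# Coherence: a speaker's current turn vs. their own previous turn.
coherence, _ = max_similarity(
    ["We are very involved in the eco aspect of the entire operation."],
    ["Sustainability drives most of our decisions."],
)

# Responsiveness: the same turn vs. the partner's previous turn.
responsiveness, _ = max_similarity(
    ["We are very involved in the eco aspect of the entire operation."],
    ["How do you handle the environmental side of things?"],
)

print(round(coherence, 3), round(responsiveness, 3))
```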
- localMaxDistro
This lists the sentence numbers at which similarity scores peak, considering all sentences from both speakers across all previous and current turns.
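The app's exact peak-finding rule isn't given in this section; the sketch below illustrates one simple interpretation, treating a sentence ID as a peak when its similarity score exceeds both of its neighbors in the score series. The scores and IDs are made up for illustration.

```python
import numpy as np

def local_max_indices(scores, sentence_ids):
    """Return the sentence IDs whose similarity score is a local maximum.

    Assumption: scores[i] is the similarity between the current sentence and
    the earlier sentence with ID sentence_ids[i].
    """
    scores = np.asarray(scores)
    peaks = []
    for i in range(len(scores)):
        left = scores[i - 1] if i > 0 else -np.inf
        right = scores[i + 1] if i < len(scores) - 1 else -np.inf
        if scores[i] > left and scores[i] > right:
            peaks.append(sentence_ids[i])
    return peaks

# Illustrative scores against earlier sentence IDs 18..22.
print(local_max_indices([0.10, 0.35, 0.62, 0.41, 0.30], [18, 19, 20, 21, 22]))  # [20]
```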
Note
A bibliography and acknowledgements for this project are available in the documentation folder. Please review the experiment results for a more in-depth analysis.