This demo shows how to build and run an AI meeting assistant powered by Daily's transcription API, alongside an embedded Daily Prebuilt call.
The server component uses Daily's Python SDK to join any Daily room with a bot assistant. The server configures an AI assistant (in this case powered by OpenAI) for each session. Each incoming transcription line is stored. At regular intervals, raw transcription lines are cleaned up through an OpenAI request. The clean transcript is accessible to the client for display and is also used as the context for subsequent meeting summaries and custom queries.
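The store-and-clean loop described above can be sketched roughly as follows. The names here (`TranscriptStore`, the `clean_fn` hook) are illustrative, not the demo's actual API; in the real server the cleanup step would wrap an OpenAI request:

```python
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class TranscriptStore:
    """Accumulates raw transcription lines and periodically cleans them.

    `clean_fn` stands in for the OpenAI request that tidies raw lines;
    here it is any callable mapping a list of raw lines to clean text.
    """
    clean_fn: Callable[[List[str]], str]
    batch_size: int = 5                      # clean up every N raw lines
    raw_lines: List[str] = field(default_factory=list)
    clean_transcript: str = ""

    def add_line(self, line: str) -> None:
        """Store an incoming transcription line; clean when a batch fills up."""
        self.raw_lines.append(line)
        if len(self.raw_lines) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        """Run the cleanup step over pending raw lines and append the result."""
        if not self.raw_lines:
            return
        cleaned = self.clean_fn(self.raw_lines)
        self.clean_transcript = (self.clean_transcript + "\n" + cleaned).strip()
        self.raw_lines.clear()
```

The accumulated `clean_transcript` is what the client would display and what later summary and query prompts would use as context.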
When a session is queried via an "app-message" event, the Python assistant bot uses the stored transcription lines to generate a response from the OpenAI assistant.
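A hedged sketch of what that query path might look like; the message shape and function names below are assumptions for illustration, not the demo's actual protocol, and `ask_openai` stands in for the real OpenAI call:

```python
from typing import Callable


def handle_app_message(message: dict, transcript: str,
                       ask_openai: Callable[[str], str]) -> str:
    """Turn an "app-message" query into an OpenAI prompt over the transcript.

    The message is assumed to carry the user's free-form request under a
    "query" key; absent that, the bot falls back to a summary request.
    """
    query = message.get("query", "Summarize the meeting so far.")
    prompt = (
        "You are a meeting assistant. Here is the meeting transcript:\n"
        f"{transcript}\n\n"
        f"Answer this request about the meeting: {query}"
    )
    return ask_openai(prompt)
```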
- Sign up for Daily
- In your dashboard, unlock Daily's usage-based pricing upgrade. You will receive a $15 credit to test Daily's features, including real-time transcription.
- Create a Daily room to use for the demo.
- Sign up for OpenAI and retrieve an OpenAI API key.
In the root of the repository on your local machine, run the following commands:
    python3 -m venv venv
    source venv/bin/activate
In the virtual environment, run the following to install requirements and run the server:
    pip install -r server/requirements.txt
    quart --app server/main.py --debug run
In another terminal window, run the following:
- Navigate to the client directory: `cd client`
- Install dependencies with `yarn install`
- Start the dev server with `yarn dev`
Open the displayed localhost port in your browser. Fill in your Daily room URL, Daily API key, and OpenAI API key.
If you'd like to use the AI assistant bot with your own client implementation, you can start it as follows:
    python -m server.call.session --room_url "YOUR_DAILY_ROOM_URL" --oai_api_key="YOUR_OPENAI_API_KEY"
Run `python -m server.call.session --help` for a full list of options.
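The two flags shown above suggest an entrypoint roughly like this minimal argparse sketch; only `--room_url` and `--oai_api_key` come from the command above, and any other details are assumptions:

```python
import argparse


def parse_args(argv=None):
    """Parse the session bot's command-line flags."""
    parser = argparse.ArgumentParser(description="Run the AI assistant bot.")
    parser.add_argument("--room_url", required=True,
                        help="URL of the Daily room to join")
    parser.add_argument("--oai_api_key", required=True,
                        help="OpenAI API key used by the assistant")
    return parser.parse_args(argv)
```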
All transcription lines are currently stored in memory. In a production environment, consider using a more scalable storage solution.
For a production use case, you can also optimize how context is stored and updated. For example, context can be strategically batched and discarded when it is no longer required. The appropriate approach will depend on your use case.
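As one illustration of the batching-and-discarding idea (a sketch, not the demo's implementation; the character budget here is an arbitrary stand-in for a real token budget):

```python
def trim_context(lines: list[str], max_chars: int) -> list[str]:
    """Keep only the most recent lines that fit within a character budget.

    Older context is discarded first, mirroring the idea of dropping
    context that is no longer required.
    """
    kept: list[str] = []
    total = 0
    # Walk newest-to-oldest so the most recent context survives.
    for line in reversed(lines):
        if total + len(line) > max_chars:
            break
        kept.append(line)
        total += len(line)
    kept.reverse()
    return kept
```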